Test Report: KVM_Linux_containerd 12739

24e369002aeb518840e093d9fb528e6077bdad6e:2021-11-18:21393

Test fail (6/285)

TestFunctional/serial/CacheCmd/cache/add_remote (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:3.3: exit status 10 (60.453726ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.3": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.3
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_de8128d312e6d2ac89c1c5074cd22b7974c28c2b_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.3". args "out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:3.3" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:latest
functional_test.go:983: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:latest: exit status 10 (64.732061ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_latest": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:latest
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_5aa7605f63066fc2b7f8379478b9def700202ac8_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:latest". args "out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add k8s.gcr.io/pause:latest" err exit status 10
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (1.05s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3: exit status 30 (59.398458ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_3.3: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_e17e40910561608ab15e9700ab84b4e1db856f38_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1041: failed to delete image k8s.gcr.io/pause:3.3 from cache. args "out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl images
functional_test.go:1067: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	IMAGE                                         TAG                               IMAGE ID            SIZE
	docker.io/kindest/kindnetd                    v20210326-1e038dc5                6de166512aa22       54MB
	docker.io/kubernetesui/dashboard              v2.3.1                            e1482a24335a6       66.9MB
	docker.io/kubernetesui/metrics-scraper        v1.0.7                            7801cfc6d5c07       15MB
	docker.io/library/minikube-local-cache-test   functional-20211117234207-20973   d019ff3187ef5       1.74kB
	gcr.io/k8s-minikube/storage-provisioner       v5                                6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                    v1.8.4                            8d147537fb7d1       13.7MB
	k8s.gcr.io/etcd                               3.5.0-0                           0048118155842       99.9MB
	k8s.gcr.io/kube-apiserver                     v1.22.3                           53224b502ea4d       31.2MB
	k8s.gcr.io/kube-controller-manager            v1.22.3                           05c905cef780c       29.8MB
	k8s.gcr.io/kube-proxy                         v1.22.3                           6120bd723dced       35.9MB
	k8s.gcr.io/kube-scheduler                     v1.22.3                           0aa9c7e31d307       15MB
	k8s.gcr.io/pause                              3.1                               da86e6ba6ca19       353kB
	k8s.gcr.io/pause                              3.5                               ed210e3e4a5ba       301kB

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl rmi k8s.gcr.io/pause:latest: exit status 1 (224.140644ms)

-- stdout --
	ERRO[0000] no such image k8s.gcr.io/pause:latest        
	FATA[0000] unable to remove the image(s)                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1087: failed to manually delete image "out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl rmi k8s.gcr.io/pause:latest" : exit status 1
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (227.702401ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1100: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (222.369377ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1102: expected "out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.88s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest: exit status 30 (56.803751ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/images/k8s.gcr.io/pause_latest: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_d17bcf228b7a032ee268baa189bce7c5c7938c34_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:latest from cache. args "out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20211118002250-20973 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
  []string{
  	"gcr.io/k8s-minikube/storage-provisioner:v5",
  	"k8s.gcr.io/coredns/coredns:v1.8.4",
+ 	"k8s.gcr.io/echoserver:1.4",
  	"k8s.gcr.io/etcd:3.5.0-0",
  	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
  	... // 2 identical elements
  	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
  	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
  }
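The "(-want +got)" diff above is go-cmp-style output: lines prefixed "-" are expected images missing from the node, and lines prefixed "+" are images found on the node but not in the expected list. A minimal Python sketch of the underlying set comparison, abridged to the entries visible in the diff (this is an illustration, not the test's actual code):

```python
# Illustration only: reproduces the set comparison behind the "(-want +got)"
# diff above. Lists are abridged to the entries shown in the diff.
want = {
    "gcr.io/k8s-minikube/storage-provisioner:v5",
    "k8s.gcr.io/coredns/coredns:v1.8.4",
    "k8s.gcr.io/etcd:3.5.0-0",
    "k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
    "k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
    "k8s.gcr.io/pause:3.5",
    "kubernetesui/dashboard:v2.3.1",         # "-" in the diff: expected, missing
    "kubernetesui/metrics-scraper:v1.0.7",   # "-" in the diff: expected, missing
}
got = (want - {"kubernetesui/dashboard:v2.3.1",
               "kubernetesui/metrics-scraper:v1.0.7"}) | {
    "k8s.gcr.io/echoserver:1.4",             # "+" in the diff: present, unexpected
}

missing = sorted(want - got)  # the "-" lines: test fails because this is non-empty
extra = sorted(got - want)    # the "+" lines: informational
print("missing:", missing)
print("extra:", extra)
```

The dashboard and metrics-scraper images in `missing` match the two "-" lines in the diff, which is what makes the test report "v1.22.4-rc.0 images missing".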
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20211118002250-20973 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-20211118002250-20973 logs -n 25: (1.382177955s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | cert-options-20211118002144-20973               | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:21:45 UTC | Thu, 18 Nov 2021 00:23:05 UTC |
	|         | cert-options-20211118002144-20973                 |                                                 |         |         |                               |                               |
	|         | --memory=2048                                     |                                                 |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                 |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                 |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                 |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                 |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                 |         |         |                               |                               |
	| -p      | cert-options-20211118002144-20973                 | cert-options-20211118002144-20973               | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:23:05 UTC | Thu, 18 Nov 2021 00:23:05 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                 |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | cert-options-20211118002144-20973               | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:23:05 UTC | Thu, 18 Nov 2021 00:23:05 UTC |
	|         | cert-options-20211118002144-20973                 |                                                 |         |         |                               |                               |
	|         | -- sudo cat                                       |                                                 |         |         |                               |                               |
	|         | /etc/kubernetes/admin.conf                        |                                                 |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20211118002144-20973               | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:23:06 UTC | Thu, 18 Nov 2021 00:23:07 UTC |
	|         | cert-options-20211118002144-20973                 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:22:50 UTC | Thu, 18 Nov 2021 00:24:51 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.4-rc.0                 |                                                 |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20211118002307-20973                | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:23:07 UTC | Thu, 18 Nov 2021 00:24:53 UTC |
	|         | embed-certs-20211118002307-20973                  |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211118002307-20973                | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:01 UTC | Thu, 18 Nov 2021 00:25:02 UTC |
	|         | embed-certs-20211118002307-20973                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:02 UTC | Thu, 18 Nov 2021 00:25:03 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20211118002250-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:22:50 UTC | Thu, 18 Nov 2021 00:25:17 UTC |
	|         | old-k8s-version-20211118002250-20973              |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                 |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211118002250-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:27 UTC | Thu, 18 Nov 2021 00:25:27 UTC |
	|         | old-k8s-version-20211118002250-20973              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| start   | -p                                                | cert-expiration-20211118002119-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:24 UTC | Thu, 18 Nov 2021 00:25:39 UTC |
	|         | cert-expiration-20211118002119-20973              |                                                 |         |         |                               |                               |
	|         | --memory=2048                                     |                                                 |         |         |                               |                               |
	|         | --cert-expiration=8760h                           |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                 |         |         |                               |                               |
	| delete  | -p                                                | cert-expiration-20211118002119-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:39 UTC | Thu, 18 Nov 2021 00:25:40 UTC |
	|         | cert-expiration-20211118002119-20973              |                                                 |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20211118002540-20973      | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:40 UTC | Thu, 18 Nov 2021 00:25:40 UTC |
	|         | disable-driver-mounts-20211118002540-20973        |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211118002307-20973                | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:02 UTC | Thu, 18 Nov 2021 00:26:35 UTC |
	|         | embed-certs-20211118002307-20973                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211118002307-20973                | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:26:35 UTC | Thu, 18 Nov 2021 00:26:35 UTC |
	|         | embed-certs-20211118002307-20973                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:03 UTC | Thu, 18 Nov 2021 00:26:35 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:26:35 UTC | Thu, 18 Nov 2021 00:26:35 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211118002250-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:27 UTC | Thu, 18 Nov 2021 00:27:02 UTC |
	|         | old-k8s-version-20211118002250-20973              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211118002250-20973            | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:27:03 UTC | Thu, 18 Nov 2021 00:27:05 UTC |
	|         | old-k8s-version-20211118002250-20973              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20211118002540-20973 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:25:40 UTC | Thu, 18 Nov 2021 00:27:08 UTC |
	|         | default-k8s-different-port-20211118002540-20973   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                 |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.3                      |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211118002540-20973 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:27:17 UTC | Thu, 18 Nov 2021 00:27:18 UTC |
	|         | default-k8s-different-port-20211118002540-20973   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211118002540-20973 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:27:18 UTC | Thu, 18 Nov 2021 00:28:51 UTC |
	|         | default-k8s-different-port-20211118002540-20973   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211118002540-20973 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:28:51 UTC | Thu, 18 Nov 2021 00:28:51 UTC |
	|         | default-k8s-different-port-20211118002540-20973   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| start   | -p                                                | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:26:35 UTC | Thu, 18 Nov 2021 00:32:37 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.4-rc.0                 |                                                 |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20211118002250-20973                 | jenkins | v1.24.0 | Thu, 18 Nov 2021 00:32:56 UTC | Thu, 18 Nov 2021 00:32:57 UTC |
	|         | no-preload-20211118002250-20973                   |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/18 00:28:51
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1118 00:28:51.637616    8641 out.go:297] Setting OutFile to fd 1 ...
	I1118 00:28:51.637774    8641 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:28:51.637784    8641 out.go:310] Setting ErrFile to fd 2...
	I1118 00:28:51.637789    8641 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:28:51.637881    8641 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1118 00:28:51.638072    8641 out.go:304] Setting JSON to false
	I1118 00:28:51.679829    8641 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":7894,"bootTime":1637187438,"procs":184,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1118 00:28:51.679926    8641 start.go:122] virtualization: kvm guest
	I1118 00:28:51.682436    8641 out.go:176] * [default-k8s-different-port-20211118002540-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	I1118 00:28:51.684097    8641 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1118 00:28:51.682607    8641 notify.go:174] Checking for updates...
	I1118 00:28:51.685553    8641 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1118 00:28:51.686994    8641 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1118 00:28:51.688517    8641 out.go:176]   - MINIKUBE_LOCATION=12739
	I1118 00:28:51.688899    8641 config.go:176] Loaded profile config "default-k8s-different-port-20211118002540-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1118 00:28:51.689234    8641 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:28:51.689277    8641 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:28:51.703380    8641 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34311
	I1118 00:28:51.704495    8641 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:28:51.705239    8641 main.go:130] libmachine: Using API Version  1
	I1118 00:28:51.705260    8641 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:28:51.705596    8641 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:28:51.705802    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:28:51.705996    8641 driver.go:343] Setting default libvirt URI to qemu:///system
	I1118 00:28:51.706400    8641 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:28:51.706437    8641 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:28:51.717406    8641 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33129
	I1118 00:28:51.717717    8641 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:28:51.718082    8641 main.go:130] libmachine: Using API Version  1
	I1118 00:28:51.718098    8641 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:28:51.718400    8641 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:28:51.718653    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:28:51.747037    8641 out.go:176] * Using the kvm2 driver based on existing profile
	I1118 00:28:51.747059    8641 start.go:280] selected driver: kvm2
	I1118 00:28:51.747064    8641 start.go:775] validating driver "kvm2" against &{Name:default-k8s-different-port-20211118002540-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211118002540-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.83.2 Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1118 00:28:51.747167    8641 start.go:786] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1118 00:28:51.748105    8641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1118 00:28:51.748258    8641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1118 00:28:51.758744    8641 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.24.0
	I1118 00:28:51.759073    8641 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1118 00:28:51.759102    8641 cni.go:93] Creating CNI manager for ""
	I1118 00:28:51.759111    8641 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1118 00:28:51.759117    8641 start_flags.go:282] config:
	{Name:default-k8s-different-port-20211118002540-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211118002540-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.83.2 Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1118 00:28:51.759199    8641 iso.go:123] acquiring lock: {Name:mk8cca007fc20acac1c2951039d04ddec7641ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1118 00:28:51.761149    8641 out.go:176] * Starting control plane node default-k8s-different-port-20211118002540-20973 in cluster default-k8s-different-port-20211118002540-20973
	I1118 00:28:51.761168    8641 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime containerd
	I1118 00:28:51.761192    8641 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4
	I1118 00:28:51.761202    8641 cache.go:57] Caching tarball of preloaded images
	I1118 00:28:51.761278    8641 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1118 00:28:51.761294    8641 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on containerd
	I1118 00:28:51.761387    8641 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/config.json ...
	I1118 00:28:51.761524    8641 cache.go:206] Successfully downloaded all kic artifacts
	I1118 00:28:51.761545    8641 start.go:313] acquiring machines lock for default-k8s-different-port-20211118002540-20973: {Name:mkdb50dbbe1b05f95907d97994d343d3b478f792 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1118 00:28:51.761588    8641 start.go:317] acquired machines lock for "default-k8s-different-port-20211118002540-20973" in 31.379µs
	I1118 00:28:51.761603    8641 start.go:93] Skipping create...Using existing machine configuration
	I1118 00:28:51.761610    8641 fix.go:55] fixHost starting: 
	I1118 00:28:51.761872    8641 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:28:51.761901    8641 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:28:51.771194    8641 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1118 00:28:51.771667    8641 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:28:51.772128    8641 main.go:130] libmachine: Using API Version  1
	I1118 00:28:51.772153    8641 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:28:51.772480    8641 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:28:51.772659    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:28:51.772810    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetState
	I1118 00:28:51.775432    8641 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211118002540-20973: state=Stopped err=<nil>
	I1118 00:28:51.775474    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	W1118 00:28:51.775628    8641 fix.go:134] unexpected machine state, will restart: <nil>
	I1118 00:28:52.671187    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:54.674974    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:50.943990    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:50.944570    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:51.443807    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:51.444355    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:51.943657    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:51.944287    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:52.443838    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:52.444473    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:52.944015    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:52.944596    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:53.444571    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:53.445145    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:53.943713    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:53.944512    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:54.443775    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:54.444350    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:54.943889    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:54.944443    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:55.444121    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:55.444708    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:51.646777    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:54.146508    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:51.777499    8641 out.go:176] * Restarting existing kvm2 VM for "default-k8s-different-port-20211118002540-20973" ...
	I1118 00:28:51.777529    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .Start
	I1118 00:28:51.777709    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Ensuring networks are active...
	I1118 00:28:51.779646    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Ensuring network default is active
	I1118 00:28:51.779924    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Ensuring network mk-default-k8s-different-port-20211118002540-20973 is active
	I1118 00:28:51.780243    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Getting domain xml...
	I1118 00:28:51.781923    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Creating domain...
	I1118 00:28:52.219858    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Waiting to get IP...
	I1118 00:28:52.220782    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:28:52.221387    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Found IP for machine: 192.168.83.2
	I1118 00:28:52.221425    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has current primary IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:28:52.221441    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Reserving static IP address...
	I1118 00:28:52.221889    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20211118002540-20973", mac: "52:54:00:39:e8:31", ip: "192.168.83.2"} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:25:54 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:28:52.221922    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | skip adding static IP to network mk-default-k8s-different-port-20211118002540-20973 - found existing host DHCP lease matching {name: "default-k8s-different-port-20211118002540-20973", mac: "52:54:00:39:e8:31", ip: "192.168.83.2"}
	I1118 00:28:52.221937    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Reserved static IP address: 192.168.83.2
	I1118 00:28:52.221951    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Waiting for SSH to be available...
	I1118 00:28:52.221964    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Getting to WaitForSSH function...
	I1118 00:28:52.227442    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:28:52.227765    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:25:54 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:28:52.227800    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:28:52.227966    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Using SSH client type: external
	I1118 00:28:52.227999    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa (-rw-------)
	I1118 00:28:52.228043    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1118 00:28:52.228064    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | About to run SSH command:
	I1118 00:28:52.228085    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | exit 0
	I1118 00:28:57.170963    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:59.172734    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:55.944257    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:55.944831    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:56.444106    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:56.444652    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:56.944387    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:56.944986    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:57.444599    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:57.445163    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:57.944388    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:57.945007    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:58.443585    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:58.444274    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:58.943824    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:58.944531    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:59.444073    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:59.444761    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:59.944332    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:28:59.944993    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:29:00.444356    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:29:00.444949    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:28:56.150808    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:28:58.647890    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:00.653805    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:01.670758    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:03.671564    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:00.944383    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:29:00.944966    8300 api_server.go:256] stopped: https://192.168.39.59:8443/healthz: Get "https://192.168.39.59:8443/healthz": dial tcp 192.168.39.59:8443: connect: connection refused
	I1118 00:29:01.444643    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:29:05.210230    8300 api_server.go:266] https://192.168.39.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1118 00:29:05.210255    8300 api_server.go:102] status: https://192.168.39.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1118 00:29:05.443677    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1118 00:29:05.443769    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1118 00:29:05.496741    8300 cri.go:76] found id: "01dc543b6678ea1fcb1bbec8afb8b18a98b6f59fa6139a06f453782c62bd0799"
	I1118 00:29:05.496765    8300 cri.go:76] found id: "e760b506ed75c15e988f1503cb1c31f03a68f338c0bb0f97186caa7177d96d38"
	I1118 00:29:05.496771    8300 cri.go:76] found id: ""
	I1118 00:29:05.496777    8300 logs.go:270] 2 containers: [01dc543b6678ea1fcb1bbec8afb8b18a98b6f59fa6139a06f453782c62bd0799 e760b506ed75c15e988f1503cb1c31f03a68f338c0bb0f97186caa7177d96d38]
	I1118 00:29:05.496821    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.501382    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.508942    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1118 00:29:05.509001    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=etcd
	I1118 00:29:05.543063    8300 cri.go:76] found id: "2186e9e127257259e18428469aa4c25b721c9347baed1931c1d50008547af725"
	I1118 00:29:05.543088    8300 cri.go:76] found id: ""
	I1118 00:29:05.543096    8300 logs.go:270] 1 containers: [2186e9e127257259e18428469aa4c25b721c9347baed1931c1d50008547af725]
	I1118 00:29:05.543139    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.549690    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1118 00:29:05.549734    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=coredns
	I1118 00:29:03.148574    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:05.149716    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:03.347192    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | SSH cmd err, output: exit status 255: 
	I1118 00:29:03.347246    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1118 00:29:03.347261    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | command : exit 0
	I1118 00:29:03.347271    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | err     : exit status 255
	I1118 00:29:03.347288    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | output  : 
	I1118 00:29:06.349397    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Getting to WaitForSSH function...
	I1118 00:29:06.355603    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.356102    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.356134    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.356408    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Using SSH client type: external
	I1118 00:29:06.356443    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa (-rw-------)
	I1118 00:29:06.356487    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1118 00:29:06.356513    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | About to run SSH command:
	I1118 00:29:06.356531    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | exit 0
	I1118 00:29:06.503556    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | SSH cmd err, output: <nil>: 
	I1118 00:29:06.503908    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetConfigRaw
	I1118 00:29:06.504742    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetIP
	I1118 00:29:06.510186    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.510557    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.510592    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.510857    8641 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/config.json ...
	I1118 00:29:06.511020    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:06.511195    8641 machine.go:88] provisioning docker machine ...
	I1118 00:29:06.511211    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:06.511401    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetMachineName
	I1118 00:29:06.511604    8641 buildroot.go:166] provisioning hostname "default-k8s-different-port-20211118002540-20973"
	I1118 00:29:06.511628    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetMachineName
	I1118 00:29:06.511819    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:06.516342    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.516713    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.516747    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.516893    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:06.517036    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:06.517174    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:06.517351    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:06.517509    8641 main.go:130] libmachine: Using SSH client type: native
	I1118 00:29:06.517701    8641 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0100] 0x7a31e0 <nil>  [] 0s} 192.168.83.2 22 <nil> <nil>}
	I1118 00:29:06.517717    8641 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211118002540-20973 && echo "default-k8s-different-port-20211118002540-20973" | sudo tee /etc/hostname
	I1118 00:29:06.168082    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:08.173021    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:05.665234    8300 cri.go:76] found id: ""
	I1118 00:29:05.665261    8300 logs.go:270] 0 containers: []
	W1118 00:29:05.665269    8300 logs.go:272] No container was found matching "coredns"
	I1118 00:29:05.665276    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1118 00:29:05.665336    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1118 00:29:05.709093    8300 cri.go:76] found id: "b7cf9e07e848da6ce923a1a815ab89238d3a339d9dbe16e66002de4ae604408d"
	I1118 00:29:05.709118    8300 cri.go:76] found id: ""
	I1118 00:29:05.709126    8300 logs.go:270] 1 containers: [b7cf9e07e848da6ce923a1a815ab89238d3a339d9dbe16e66002de4ae604408d]
	I1118 00:29:05.709182    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.713714    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1118 00:29:05.713764    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1118 00:29:05.761804    8300 cri.go:76] found id: ""
	I1118 00:29:05.761831    8300 logs.go:270] 0 containers: []
	W1118 00:29:05.761838    8300 logs.go:272] No container was found matching "kube-proxy"
	I1118 00:29:05.761846    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1118 00:29:05.761897    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1118 00:29:05.805948    8300 cri.go:76] found id: ""
	I1118 00:29:05.805986    8300 logs.go:270] 0 containers: []
	W1118 00:29:05.805994    8300 logs.go:272] No container was found matching "kubernetes-dashboard"
	I1118 00:29:05.806002    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1118 00:29:05.806059    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1118 00:29:05.851077    8300 cri.go:76] found id: ""
	I1118 00:29:05.851100    8300 logs.go:270] 0 containers: []
	W1118 00:29:05.851105    8300 logs.go:272] No container was found matching "storage-provisioner"
	I1118 00:29:05.851111    8300 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1118 00:29:05.851156    8300 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1118 00:29:05.914479    8300 cri.go:76] found id: "eb0df59bff2fb14df887c129e18f242b50e59820c37865730a3ba26989f2bc26"
	I1118 00:29:05.914507    8300 cri.go:76] found id: "82e46b1a7387f139a53ce8d67e789775dc35cc70232d45b09c223570ebbbc512"
	I1118 00:29:05.914515    8300 cri.go:76] found id: ""
	I1118 00:29:05.914523    8300 logs.go:270] 2 containers: [eb0df59bff2fb14df887c129e18f242b50e59820c37865730a3ba26989f2bc26 82e46b1a7387f139a53ce8d67e789775dc35cc70232d45b09c223570ebbbc512]
	I1118 00:29:05.914578    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.927051    8300 ssh_runner.go:152] Run: which crictl
	I1118 00:29:05.946673    8300 logs.go:123] Gathering logs for kube-apiserver [01dc543b6678ea1fcb1bbec8afb8b18a98b6f59fa6139a06f453782c62bd0799] ...
	I1118 00:29:05.946699    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01dc543b6678ea1fcb1bbec8afb8b18a98b6f59fa6139a06f453782c62bd0799"
	I1118 00:29:06.022578    8300 logs.go:123] Gathering logs for etcd [2186e9e127257259e18428469aa4c25b721c9347baed1931c1d50008547af725] ...
	I1118 00:29:06.022615    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2186e9e127257259e18428469aa4c25b721c9347baed1931c1d50008547af725"
	I1118 00:29:06.077621    8300 logs.go:123] Gathering logs for kube-controller-manager [eb0df59bff2fb14df887c129e18f242b50e59820c37865730a3ba26989f2bc26] ...
	I1118 00:29:06.077652    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb0df59bff2fb14df887c129e18f242b50e59820c37865730a3ba26989f2bc26"
	I1118 00:29:06.122141    8300 logs.go:123] Gathering logs for kube-controller-manager [82e46b1a7387f139a53ce8d67e789775dc35cc70232d45b09c223570ebbbc512] ...
	I1118 00:29:06.122168    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e46b1a7387f139a53ce8d67e789775dc35cc70232d45b09c223570ebbbc512"
	I1118 00:29:06.167767    8300 logs.go:123] Gathering logs for containerd ...
	I1118 00:29:06.167801    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1118 00:29:06.221159    8300 logs.go:123] Gathering logs for kubelet ...
	I1118 00:29:06.221215    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1118 00:29:06.262043    8300 logs.go:138] Found kubelet problem: Nov 18 00:28:44 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:28:44.085150    2315 pod_workers.go:190] Error syncing pod e265058d564eb01eef81974df1bd5490 ("kube-apiserver-old-k8s-version-20211118002250-20973_kube-system(e265058d564eb01eef81974df1bd5490)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-old-k8s-version-20211118002250-20973_kube-system(e265058d564eb01eef81974df1bd5490)"
	W1118 00:29:06.263334    8300 logs.go:138] Found kubelet problem: Nov 18 00:28:44 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:28:44.107816    2315 pod_workers.go:190] Error syncing pod 8cf10e0a78f1d68b8b985c6b1bf0e34b ("kube-controller-manager-old-k8s-version-20211118002250-20973_kube-system(8cf10e0a78f1d68b8b985c6b1bf0e34b)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3139906139 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists"
	W1118 00:29:06.268356    8300 logs.go:138] Found kubelet problem: Nov 18 00:28:45 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:28:45.093836    2315 pod_workers.go:190] Error syncing pod 8cf10e0a78f1d68b8b985c6b1bf0e34b ("kube-controller-manager-old-k8s-version-20211118002250-20973_kube-system(8cf10e0a78f1d68b8b985c6b1bf0e34b)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-20211118002250-20973_kube-system(8cf10e0a78f1d68b8b985c6b1bf0e34b)"
	W1118 00:29:06.270904    8300 logs.go:138] Found kubelet problem: Nov 18 00:28:46 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:28:46.295004    2315 pod_workers.go:190] Error syncing pod e265058d564eb01eef81974df1bd5490 ("kube-apiserver-old-k8s-version-20211118002250-20973_kube-system(e265058d564eb01eef81974df1bd5490)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-old-k8s-version-20211118002250-20973_kube-system(e265058d564eb01eef81974df1bd5490)"
	W1118 00:29:06.306580    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.285607    2315 reflector.go:126] object-"kube-system"/"storage-provisioner-token-4thwm": Failed to list *v1.Secret: secrets "storage-provisioner-token-4thwm" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.306770    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.313948    2315 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.306952    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.322254    2315 reflector.go:126] object-"default"/"default-token-w5p9q": Failed to list *v1.Secret: secrets "default-token-w5p9q" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.307359    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.324774    2315 reflector.go:126] object-"kube-system"/"kube-proxy-token-tl9k2": Failed to list *v1.Secret: secrets "kube-proxy-token-tl9k2" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.307535    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.324908    2315 reflector.go:126] object-"kube-system"/"coredns-token-vr5gq": Failed to list *v1.Secret: secrets "coredns-token-vr5gq" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.307706    8300 logs.go:138] Found kubelet problem: Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.325028    2315 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	I1118 00:29:06.310003    8300 logs.go:123] Gathering logs for describe nodes ...
	I1118 00:29:06.310023    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1118 00:29:06.683277    8300 logs.go:123] Gathering logs for kube-scheduler [b7cf9e07e848da6ce923a1a815ab89238d3a339d9dbe16e66002de4ae604408d] ...
	I1118 00:29:06.683309    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7cf9e07e848da6ce923a1a815ab89238d3a339d9dbe16e66002de4ae604408d"
	I1118 00:29:06.785775    8300 logs.go:123] Gathering logs for container status ...
	I1118 00:29:06.785810    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1118 00:29:06.834800    8300 logs.go:123] Gathering logs for dmesg ...
	I1118 00:29:06.834842    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1118 00:29:06.854630    8300 logs.go:123] Gathering logs for kube-apiserver [e760b506ed75c15e988f1503cb1c31f03a68f338c0bb0f97186caa7177d96d38] ...
	I1118 00:29:06.854656    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760b506ed75c15e988f1503cb1c31f03a68f338c0bb0f97186caa7177d96d38"
	I1118 00:29:06.907591    8300 out.go:310] Setting ErrFile to fd 2...
	I1118 00:29:06.907618    8300 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W1118 00:29:06.907724    8300 out.go:241] X Problems detected in kubelet:
	W1118 00:29:06.907740    8300 out.go:241]   Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.313948    2315 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.907752    8300 out.go:241]   Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.322254    2315 reflector.go:126] object-"default"/"default-token-w5p9q": Failed to list *v1.Secret: secrets "default-token-w5p9q" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.907764    8300 out.go:241]   Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.324774    2315 reflector.go:126] object-"kube-system"/"kube-proxy-token-tl9k2": Failed to list *v1.Secret: secrets "kube-proxy-token-tl9k2" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.907779    8300 out.go:241]   Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.324908    2315 reflector.go:126] object-"kube-system"/"coredns-token-vr5gq": Failed to list *v1.Secret: secrets "coredns-token-vr5gq" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	W1118 00:29:06.907792    8300 out.go:241]   Nov 18 00:29:05 old-k8s-version-20211118002250-20973 kubelet[2315]: E1118 00:29:05.325028    2315 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-20211118002250-20973" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-20211118002250-20973" and this object
	I1118 00:29:06.907802    8300 out.go:310] Setting ErrFile to fd 2...
	I1118 00:29:06.907809    8300 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:29:07.651089    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:10.150817    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:06.673361    8641 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211118002540-20973
	
	I1118 00:29:06.673391    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:06.679369    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.679738    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.679771    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.679940    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:06.680166    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:06.680358    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:06.680570    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:06.680759    8641 main.go:130] libmachine: Using SSH client type: native
	I1118 00:29:06.680935    8641 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0100] 0x7a31e0 <nil>  [] 0s} 192.168.83.2 22 <nil> <nil>}
	I1118 00:29:06.680963    8641 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211118002540-20973' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211118002540-20973/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211118002540-20973' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1118 00:29:06.827698    8641 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1118 00:29:06.827729    8641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube}
	I1118 00:29:06.827774    8641 buildroot.go:174] setting up certificates
	I1118 00:29:06.827790    8641 provision.go:83] configureAuth start
	I1118 00:29:06.827807    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetMachineName
	I1118 00:29:06.828068    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetIP
	I1118 00:29:06.834271    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.834674    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.834711    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.834812    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:06.839860    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.840183    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:06.840218    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:06.840327    8641 provision.go:138] copyHostCerts
	I1118 00:29:06.840379    8641 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem, removing ...
	I1118 00:29:06.840393    8641 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem
	I1118 00:29:06.840451    8641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/key.pem (1675 bytes)
	I1118 00:29:06.840589    8641 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem, removing ...
	I1118 00:29:06.840600    8641 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem
	I1118 00:29:06.840623    8641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.pem (1078 bytes)
	I1118 00:29:06.840684    8641 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem, removing ...
	I1118 00:29:06.840691    8641 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem
	I1118 00:29:06.840716    8641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cert.pem (1123 bytes)
	I1118 00:29:06.840761    8641 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211118002540-20973 san=[192.168.83.2 192.168.83.2 localhost 127.0.0.1 minikube default-k8s-different-port-20211118002540-20973]
	I1118 00:29:07.363768    8641 provision.go:172] copyRemoteCerts
	I1118 00:29:07.363872    8641 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1118 00:29:07.363896    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:07.369687    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.370073    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.370102    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.370289    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:07.370536    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.370721    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:07.370891    8641 sshutil.go:53] new ssh client: &{IP:192.168.83.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa Username:docker}
	I1118 00:29:07.475533    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1118 00:29:07.505310    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I1118 00:29:07.532746    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1118 00:29:07.562070    8641 provision.go:86] duration metric: configureAuth took 734.263615ms
	I1118 00:29:07.562098    8641 buildroot.go:189] setting minikube options for container-runtime
	I1118 00:29:07.562368    8641 config.go:176] Loaded profile config "default-k8s-different-port-20211118002540-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1118 00:29:07.562389    8641 machine.go:91] provisioned docker machine in 1.051181027s
	I1118 00:29:07.562399    8641 start.go:267] post-start starting for "default-k8s-different-port-20211118002540-20973" (driver="kvm2")
	I1118 00:29:07.562407    8641 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1118 00:29:07.562437    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.562763    8641 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1118 00:29:07.562802    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:07.568730    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.569142    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.569184    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.569313    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:07.569527    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.569685    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:07.569837    8641 sshutil.go:53] new ssh client: &{IP:192.168.83.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa Username:docker}
	I1118 00:29:07.674628    8641 ssh_runner.go:152] Run: cat /etc/os-release
	I1118 00:29:07.680530    8641 info.go:137] Remote host: Buildroot 2021.02.4
	I1118 00:29:07.680558    8641 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/addons for local assets ...
	I1118 00:29:07.680623    8641 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files for local assets ...
	I1118 00:29:07.680710    8641 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/209732.pem -> 209732.pem in /etc/ssl/certs
	I1118 00:29:07.680815    8641 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I1118 00:29:07.693049    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/209732.pem --> /etc/ssl/certs/209732.pem (1708 bytes)
	I1118 00:29:07.725580    8641 start.go:270] post-start completed in 163.162798ms
	I1118 00:29:07.725632    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.725908    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:07.731712    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.732113    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.732151    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.732406    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:07.732617    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.732799    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.732961    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:07.733173    8641 main.go:130] libmachine: Using SSH client type: native
	I1118 00:29:07.733331    8641 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0100] 0x7a31e0 <nil>  [] 0s} 192.168.83.2 22 <nil> <nil>}
	I1118 00:29:07.733345    8641 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I1118 00:29:07.877451    8641 main.go:130] libmachine: SSH cmd err, output: <nil>: 1637195347.826381261
	
	I1118 00:29:07.877480    8641 fix.go:212] guest clock: 1637195347.826381261
	I1118 00:29:07.877489    8641 fix.go:225] Guest: 2021-11-18 00:29:07.826381261 +0000 UTC Remote: 2021-11-18 00:29:07.725884427 +0000 UTC m=+16.135770343 (delta=100.496834ms)
	I1118 00:29:07.877516    8641 fix.go:196] guest clock delta is within tolerance: 100.496834ms
	I1118 00:29:07.877522    8641 fix.go:57] fixHost completed within 16.115912245s
	I1118 00:29:07.877529    8641 start.go:80] releasing machines lock for "default-k8s-different-port-20211118002540-20973", held for 16.115930936s
	I1118 00:29:07.877580    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.877921    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetIP
	I1118 00:29:07.883741    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.884091    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.884121    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.884339    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.884551    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.885047    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .DriverName
	I1118 00:29:07.885308    8641 ssh_runner.go:152] Run: systemctl --version
	I1118 00:29:07.885339    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:07.885358    8641 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1118 00:29:07.885403    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHHostname
	I1118 00:29:07.892323    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.892356    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.892649    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.892678    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.892708    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:07.892724    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:07.892987    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:07.892988    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHPort
	I1118 00:29:07.893230    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.893241    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHKeyPath
	I1118 00:29:07.893419    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:07.893428    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetSSHUsername
	I1118 00:29:07.893647    8641 sshutil.go:53] new ssh client: &{IP:192.168.83.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa Username:docker}
	I1118 00:29:07.893697    8641 sshutil.go:53] new ssh client: &{IP:192.168.83.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/default-k8s-different-port-20211118002540-20973/id_rsa Username:docker}
	I1118 00:29:07.999322    8641 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime containerd
	I1118 00:29:07.999449    8641 ssh_runner.go:152] Run: sudo crictl images --output json
	I1118 00:29:10.672622    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:13.175827    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:12.651400    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:14.655290    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:12.033085    8641 ssh_runner.go:192] Completed: sudo crictl images --output json: (4.033606311s)
	I1118 00:29:12.033255    8641 containerd.go:631] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.3". assuming images are not preloaded.
	I1118 00:29:12.033315    8641 ssh_runner.go:152] Run: which lz4
	I1118 00:29:12.037603    8641 ssh_runner.go:152] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1118 00:29:12.042322    8641 ssh_runner.go:309] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1118 00:29:12.042349    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (596404090 bytes)
	I1118 00:29:13.870721    8641 containerd.go:568] Took 1.833152 seconds to copy over tarball
	I1118 00:29:13.870792    8641 ssh_runner.go:152] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1118 00:29:16.909088    8300 api_server.go:240] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I1118 00:29:16.917083    8300 api_server.go:266] https://192.168.39.59:8443/healthz returned 200:
	ok
	I1118 00:29:16.926470    8300 api_server.go:140] control plane version: v1.14.0
	I1118 00:29:16.926490    8300 api_server.go:130] duration metric: took 1m14.483375701s to wait for apiserver health ...
	I1118 00:29:16.926500    8300 cni.go:93] Creating CNI manager for ""
	I1118 00:29:16.926516    8300 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1118 00:29:15.672336    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:18.172121    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:16.928654    8300 out.go:176] * Configuring bridge CNI (Container Networking Interface) ...
	I1118 00:29:16.928711    8300 ssh_runner.go:152] Run: sudo mkdir -p /etc/cni/net.d
	I1118 00:29:16.946564    8300 ssh_runner.go:319] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1118 00:29:16.980937    8300 system_pods.go:43] waiting for kube-system pods to appear ...
	I1118 00:29:16.995677    8300 system_pods.go:59] 7 kube-system pods found
	I1118 00:29:16.995712    8300 system_pods.go:61] "coredns-fb8b8dccf-vr6xn" [d61b4978-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995718    8300 system_pods.go:61] "etcd-old-k8s-version-20211118002250-20973" [f200ac6b-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995722    8300 system_pods.go:61] "kube-apiserver-old-k8s-version-20211118002250-20973" [f9bf5828-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995726    8300 system_pods.go:61] "kube-controller-manager-old-k8s-version-20211118002250-20973" [f62b6e39-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995733    8300 system_pods.go:61] "kube-proxy-57jtv" [d6544802-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995738    8300 system_pods.go:61] "kube-scheduler-old-k8s-version-20211118002250-20973" [f3c92494-4805-11ec-8e44-52540009518b] Running
	I1118 00:29:16.995747    8300 system_pods.go:61] "storage-provisioner" [d762dcd4-4805-11ec-8e44-52540009518b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1118 00:29:16.995764    8300 system_pods.go:74] duration metric: took 14.802059ms to wait for pod list to return data ...
	I1118 00:29:16.995779    8300 node_conditions.go:102] verifying NodePressure condition ...
	I1118 00:29:17.002024    8300 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1118 00:29:17.002061    8300 node_conditions.go:123] node cpu capacity is 2
	I1118 00:29:17.002076    8300 node_conditions.go:105] duration metric: took 6.289032ms to run NodePressure ...
	I1118 00:29:17.002095    8300 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.14.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:17.254832    8300 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1118 00:29:17.261344    8300 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I1118 00:29:17.631735    8300 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I1118 00:29:18.077174    8300 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I1118 00:29:18.611711    8300 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I1118 00:29:17.147090    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:19.147851    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:21.351130    8641 ssh_runner.go:192] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.480307871s)
	I1118 00:29:21.351165    8641 containerd.go:575] Took 7.480415 seconds to extract the tarball
	I1118 00:29:21.351178    8641 ssh_runner.go:103] rm: /preloaded.tar.lz4
	I1118 00:29:21.408875    8641 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1118 00:29:21.560152    8641 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1118 00:29:21.537115    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:21.357483    8300 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I1118 00:29:21.362088    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:21.611885    8641 ssh_runner.go:152] Run: sudo systemctl stop -f crio
	I1118 00:29:24.354047    8641 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I1118 00:29:24.371904    8641 docker.go:156] disabling docker service ...
	I1118 00:29:24.371968    8641 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
	I1118 00:29:24.389263    8641 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
	I1118 00:29:24.405347    8641 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
	I1118 00:29:24.553631    8641 ssh_runner.go:152] Run: sudo systemctl mask docker.service
	I1118 00:29:24.697918    8641 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
	I1118 00:29:24.716856    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1118 00:29:24.742317    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLnNlcnZpY2UudjEuZGlmZi1zZXJ2aWNlIl0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdjLnYxLnNjaGVkdWxlciJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNja
GVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
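The containerd configuration is shipped to the guest as a base64 payload piped through `base64 -d | sudo tee /etc/containerd/config.toml`. The same decode step can be checked locally without touching /etc; the payload below is a short stand-in, not the full blob from the log:

```shell
# Decode a base64-encoded TOML payload the way the provisioner command does.
# PAYLOAD here is illustrative; the real command carries the whole config.toml.
PAYLOAD=$(printf 'version = 2\nroot = "/var/lib/containerd"\n' | base64)
printf %s "$PAYLOAD" | base64 -d
```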
	I1118 00:29:24.766784    8641 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1118 00:29:24.780546    8641 crio.go:138] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
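The failed `sysctl` above is tolerated ("which might be okay") because the very next step loads `br_netfilter`, which creates `/proc/sys/net/bridge/*`. A sketch of that check-then-fall-back pattern (hedged: `modprobe` needs root, so this only echoes what it would do):

```shell
# If the bridge netfilter sysctl key is readable, the module is loaded;
# otherwise the recovery step is to modprobe br_netfilter (as the log does).
if sysctl -n net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
  echo "bridge netfilter already available"
else
  echo "would run: sudo modprobe br_netfilter"
fi
```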
	I1118 00:29:24.780602    8641 ssh_runner.go:152] Run: sudo modprobe br_netfilter
	I1118 00:29:24.800288    8641 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1118 00:29:24.811862    8641 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I1118 00:29:24.954298    8641 ssh_runner.go:152] Run: sudo systemctl restart containerd
	I1118 00:29:24.984530    8641 start.go:403] Will wait 60s for socket path /run/containerd/containerd.sock
	I1118 00:29:24.984619    8641 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1118 00:29:24.990082    8641 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1118 00:29:26.094930    8641 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
	I1118 00:29:26.100874    8641 start.go:424] Will wait 60s for crictl version
	I1118 00:29:26.100933    8641 ssh_runner.go:152] Run: sudo crictl version
	I1118 00:29:26.138706    8641 start.go:433] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I1118 00:29:26.138819    8641 ssh_runner.go:152] Run: containerd --version
	I1118 00:29:26.167627    8641 ssh_runner.go:152] Run: containerd --version
	I1118 00:29:26.195344    8641 out.go:176] * Preparing Kubernetes v1.22.3 on containerd 1.4.9 ...
	I1118 00:29:26.195419    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) Calling .GetIP
	I1118 00:29:26.201123    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:26.201447    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:e8:31", ip: ""} in network mk-default-k8s-different-port-20211118002540-20973: {Iface:virbr5 ExpiryTime:2021-11-18 01:29:01 +0000 UTC Type:0 Mac:52:54:00:39:e8:31 Iaid: IPaddr:192.168.83.2 Prefix:24 Hostname:default-k8s-different-port-20211118002540-20973 Clientid:01:52:54:00:39:e8:31}
	I1118 00:29:26.201483    8641 main.go:130] libmachine: (default-k8s-different-port-20211118002540-20973) DBG | domain default-k8s-different-port-20211118002540-20973 has defined IP address 192.168.83.2 and MAC address 52:54:00:39:e8:31 in network mk-default-k8s-different-port-20211118002540-20973
	I1118 00:29:26.201620    8641 ssh_runner.go:152] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1118 00:29:26.205902    8641 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1118 00:29:26.220554    8641 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime containerd
	I1118 00:29:26.220604    8641 ssh_runner.go:152] Run: sudo crictl images --output json
	I1118 00:29:26.256452    8641 containerd.go:635] all images are preloaded for containerd runtime.
	I1118 00:29:26.256478    8641 containerd.go:539] Images already preloaded, skipping extraction
	I1118 00:29:26.256531    8641 ssh_runner.go:152] Run: sudo crictl images --output json
	I1118 00:29:26.300240    8641 containerd.go:635] all images are preloaded for containerd runtime.
	I1118 00:29:26.300266    8641 cache_images.go:79] Images are preloaded, skipping loading
	I1118 00:29:26.300349    8641 ssh_runner.go:152] Run: sudo crictl info
	I1118 00:29:26.335817    8641 cni.go:93] Creating CNI manager for ""
	I1118 00:29:26.335844    8641 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1118 00:29:26.335860    8641 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1118 00:29:26.335923    8641 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.2 APIServerPort:8444 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211118002540-20973 NodeName:default-k8s-different-port-20211118002540-20973 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.83
.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1118 00:29:26.336082    8641 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211118002540-20973"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
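	
The generated file is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal sketch of checking that structure, assuming the config is saved as a local file (the heredoc below reproduces only the `apiVersion`/`kind` skeleton, not the full config):

```shell
# Write the skeleton of the multi-doc kubeadm config and list each document's
# kind, which is a cheap sanity check on the stream's structure.
cat > /tmp/kubeadm-sketch.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' /tmp/kubeadm-sketch.yaml
```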
	
	I1118 00:29:26.336185    8641 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20211118002540-20973 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211118002540-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
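The empty `ExecStart=` line in the drop-in above is deliberate: in a systemd drop-in, a bare `ExecStart=` clears the unit's command list so the following `ExecStart=` replaces the base unit's command rather than adding a second one. A sketch of the same pattern (paths under `/tmp` are stand-ins for `/etc/systemd/system/kubelet.service.d/`):

```shell
# Build a drop-in that resets ExecStart and then sets a new one; both lines
# must be present for the override to replace rather than append.
mkdir -p /tmp/kubelet.service.d
cat > /tmp/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart=' /tmp/kubelet.service.d/10-kubeadm.conf
```

After installing such a drop-in for real, `systemctl daemon-reload` (which the log runs) is what makes systemd pick it up.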
	I1118 00:29:26.336242    8641 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
	I1118 00:29:26.350674    8641 binaries.go:44] Found k8s binaries, skipping transfer
	I1118 00:29:26.350736    8641 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1118 00:29:26.364447    8641 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I1118 00:29:26.387733    8641 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1118 00:29:26.409621    8641 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1118 00:29:26.431492    8641 ssh_runner.go:152] Run: grep 192.168.83.2	control-plane.minikube.internal$ /etc/hosts
	I1118 00:29:26.435471    8641 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1118 00:29:26.448396    8641 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973 for IP: 192.168.83.2
	I1118 00:29:26.448506    8641 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key
	I1118 00:29:26.448548    8641 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key
	I1118 00:29:26.448611    8641 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.key
	I1118 00:29:26.448663    8641 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/apiserver.key.52de6010
	I1118 00:29:26.448697    8641 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/proxy-client.key
	I1118 00:29:26.448778    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/20973.pem (1338 bytes)
	W1118 00:29:26.448804    8641 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/20973_empty.pem, impossibly tiny 0 bytes
	I1118 00:29:26.448814    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca-key.pem (1675 bytes)
	I1118 00:29:26.448842    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/ca.pem (1078 bytes)
	I1118 00:29:26.448866    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/cert.pem (1123 bytes)
	I1118 00:29:26.448900    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/key.pem (1675 bytes)
	I1118 00:29:26.448954    8641 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/209732.pem (1708 bytes)
	I1118 00:29:26.449861    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1118 00:29:26.477492    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1118 00:29:26.505602    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1118 00:29:26.536702    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1118 00:29:26.567558    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1118 00:29:26.669246    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:29.169228    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:25.821715    8300 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I1118 00:29:26.902961    8300 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I1118 00:29:28.778071    8300 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I1118 00:29:26.648871    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:28.651772    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:26.597287    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1118 00:29:26.627802    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1118 00:29:26.657754    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1118 00:29:26.688507    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1118 00:29:26.719635    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/certs/20973.pem --> /usr/share/ca-certificates/20973.pem (1338 bytes)
	I1118 00:29:26.748026    8641 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/ssl/certs/209732.pem --> /usr/share/ca-certificates/209732.pem (1708 bytes)
	I1118 00:29:26.777066    8641 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1118 00:29:26.798736    8641 ssh_runner.go:152] Run: openssl version
	I1118 00:29:26.804399    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1118 00:29:26.816949    8641 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1118 00:29:26.822104    8641 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 17 23:35 /usr/share/ca-certificates/minikubeCA.pem
	I1118 00:29:26.822150    8641 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1118 00:29:26.828642    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1118 00:29:26.840793    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20973.pem && ln -fs /usr/share/ca-certificates/20973.pem /etc/ssl/certs/20973.pem"
	I1118 00:29:26.853877    8641 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/20973.pem
	I1118 00:29:26.858479    8641 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 17 23:42 /usr/share/ca-certificates/20973.pem
	I1118 00:29:26.858524    8641 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20973.pem
	I1118 00:29:26.864377    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20973.pem /etc/ssl/certs/51391683.0"
	I1118 00:29:26.876699    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/209732.pem && ln -fs /usr/share/ca-certificates/209732.pem /etc/ssl/certs/209732.pem"
	I1118 00:29:26.889008    8641 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/209732.pem
	I1118 00:29:26.893694    8641 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 17 23:42 /usr/share/ca-certificates/209732.pem
	I1118 00:29:26.893735    8641 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/209732.pem
	I1118 00:29:26.899792    8641 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/209732.pem /etc/ssl/certs/3ec20f2e.0"
	I1118 00:29:26.913182    8641 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20211118002540-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-po
rt-20211118002540-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.83.2 Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedul
edStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1118 00:29:26.913295    8641 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1118 00:29:26.913357    8641 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1118 00:29:26.949633    8641 cri.go:76] found id: ""
	I1118 00:29:26.949698    8641 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1118 00:29:26.962042    8641 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1118 00:29:26.962064    8641 kubeadm.go:600] restartCluster start
	I1118 00:29:26.962106    8641 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I1118 00:29:26.974232    8641 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:26.975171    8641 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211118002540-20973" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1118 00:29:26.975589    8641 kubeconfig.go:127] "default-k8s-different-port-20211118002540-20973" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig - will repair!
	I1118 00:29:26.976287    8641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig: {Name:mk4bc2bc72dc43be7e3142c29995aa6eeea09f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1118 00:29:26.978543    8641 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1118 00:29:26.989971    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:26.990017    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:27.003478    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:27.203815    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:27.203895    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:27.218761    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:27.404052    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:27.404127    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:27.418600    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:27.603954    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:27.604034    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:27.619646    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:27.803947    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:27.804027    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:27.819508    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:28.003745    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:28.003836    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:28.017507    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:28.203616    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:28.203693    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:28.219100    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:28.404335    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:28.404421    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:28.419934    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:28.604183    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:28.604268    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:28.619564    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:28.803749    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:28.803826    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:28.819556    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:29.003830    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:29.003909    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:29.019504    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:29.203870    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:29.203935    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:29.219071    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:29.404448    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:29.404525    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:29.418561    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:29.603758    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:29.603858    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:29.619966    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:29.804229    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:29.804322    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:29.818557    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:30.003630    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:30.003719    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:30.016786    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1118 00:29:30.016805    8641 api_server.go:165] Checking apiserver status ...
	I1118 00:29:30.016847    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1118 00:29:30.030529    8641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
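The block above is a fixed-interval poll: every ~200ms minikube re-runs `pgrep` for the apiserver until it appears or the wait times out (here it times out, leading to the "needs reconfigure" decision). A sketch of that poll-until-deadline shape, with a file appearing after one second standing in for the `pgrep` check:

```shell
# Poll for a condition on a short interval until it holds or a deadline
# passes. The background touch simulates the apiserver coming up.
marker=/tmp/apiserver-marker.$$
deadline=$(( $(date +%s) + 5 ))
( sleep 1; touch "$marker" ) &
until [ -e "$marker" ]; do
  [ "$(date +%s)" -ge "$deadline" ] && break
  sleep 0.2
done
[ -e "$marker" ] && echo "check succeeded before deadline"
rm -f "$marker"
```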
	I1118 00:29:30.030552    8641 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I1118 00:29:30.030559    8641 kubeadm.go:1032] stopping kube-system containers ...
	I1118 00:29:30.030573    8641 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1118 00:29:30.030623    8641 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1118 00:29:30.066346    8641 cri.go:76] found id: ""
	I1118 00:29:30.066401    8641 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I1118 00:29:30.084608    8641 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1118 00:29:30.098217    8641 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1118 00:29:30.098272    8641 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1118 00:29:30.111261    8641 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1118 00:29:30.111280    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:30.330667    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:31.215853    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:31.460539    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:31.570822    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:29:31.170745    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:33.171631    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:31.333619    8300 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I1118 00:29:31.147975    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:33.648622    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:31.677127    8641 api_server.go:51] waiting for apiserver process to appear ...
	I1118 00:29:31.677195    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:32.195420    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:32.694748    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:33.194733    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:33.695553    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:34.195475    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:34.695009    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:35.194721    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:35.695062    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:36.194797    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:35.668151    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:37.673983    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:40.173480    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:36.472607    8300 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I1118 00:29:36.147412    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:38.150741    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:40.646019    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:36.695537    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:37.195328    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:37.694798    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:38.195601    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:38.694970    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:39.194898    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:39.695751    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:40.195086    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:40.695779    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:41.195084    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:42.669481    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:44.674440    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:42.648141    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:44.652073    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:41.694762    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:42.194989    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:42.694772    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:43.194823    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:43.694890    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:44.194761    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:44.694925    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:45.195593    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:45.694807    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:46.195536    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:47.171321    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:49.171598    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:46.235672    8300 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I1118 00:29:47.149035    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:49.150560    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:46.695500    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:47.194752    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:47.694905    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:48.195617    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:48.695408    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:49.194987    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:49.694785    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:50.195653    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:50.694907    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:51.195360    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:51.670174    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:54.168714    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:51.151403    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:53.645813    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:55.647541    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:51.694806    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:52.195668    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:52.695400    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:53.194936    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:53.695712    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:54.195486    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:54.695343    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:55.194829    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:55.694996    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:56.194782    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:56.169689    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:58.670456    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:57.648163    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:00.150149    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:29:56.694946    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:57.194799    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:57.694812    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:58.195637    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:58.695331    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:59.194967    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:29:59.695391    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:30:00.195586    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:30:00.695031    8641 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:30:00.712879    8641 api_server.go:71] duration metric: took 29.035750444s to wait for apiserver process to appear ...
	I1118 00:30:00.712908    8641 api_server.go:87] waiting for apiserver healthz status ...
	I1118 00:30:00.712919    8641 api_server.go:240] Checking apiserver healthz at https://192.168.83.2:8444/healthz ...
	I1118 00:30:00.672508    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:03.172078    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:05.180742    8300 kubeadm.go:746] kubelet initialised
	I1118 00:30:05.180771    8300 kubeadm.go:747] duration metric: took 47.92590417s waiting for restarted kubelet to initialise ...
	I1118 00:30:05.180782    8300 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:30:05.189764    8300 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-tq2km" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.200955    8300 pod_ready.go:92] pod "coredns-fb8b8dccf-tq2km" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.200977    8300 pod_ready.go:81] duration metric: took 11.184365ms waiting for pod "coredns-fb8b8dccf-tq2km" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.200988    8300 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-vr6xn" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.207473    8300 pod_ready.go:92] pod "coredns-fb8b8dccf-vr6xn" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.207492    8300 pod_ready.go:81] duration metric: took 6.49607ms waiting for pod "coredns-fb8b8dccf-vr6xn" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.207503    8300 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.213774    8300 pod_ready.go:92] pod "etcd-old-k8s-version-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.213795    8300 pod_ready.go:81] duration metric: took 6.278536ms waiting for pod "etcd-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.213805    8300 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.223886    8300 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.223904    8300 pod_ready.go:81] duration metric: took 10.090338ms waiting for pod "kube-apiserver-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.223915    8300 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.578043    8300 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.578065    8300 pod_ready.go:81] duration metric: took 354.142183ms waiting for pod "kube-controller-manager-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.578077    8300 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-57jtv" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:02.152313    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:04.650043    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:04.626550    8641 api_server.go:266] https://192.168.83.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1118 00:30:04.626583    8641 api_server.go:102] status: https://192.168.83.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1118 00:30:05.126823    8641 api_server.go:240] Checking apiserver healthz at https://192.168.83.2:8444/healthz ...
	I1118 00:30:05.153686    8641 api_server.go:266] https://192.168.83.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1118 00:30:05.153717    8641 api_server.go:102] status: https://192.168.83.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1118 00:30:05.627307    8641 api_server.go:240] Checking apiserver healthz at https://192.168.83.2:8444/healthz ...
	I1118 00:30:05.634804    8641 api_server.go:266] https://192.168.83.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1118 00:30:05.634825    8641 api_server.go:102] status: https://192.168.83.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1118 00:30:06.127381    8641 api_server.go:240] Checking apiserver healthz at https://192.168.83.2:8444/healthz ...
	I1118 00:30:06.133540    8641 api_server.go:266] https://192.168.83.2:8444/healthz returned 200:
	ok
	I1118 00:30:06.140846    8641 api_server.go:140] control plane version: v1.22.3
	I1118 00:30:06.140865    8641 api_server.go:130] duration metric: took 5.427952042s to wait for apiserver health ...
	I1118 00:30:06.140876    8641 cni.go:93] Creating CNI manager for ""
	I1118 00:30:06.140882    8641 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1118 00:30:06.142981    8641 out.go:176] * Configuring bridge CNI (Container Networking Interface) ...
	I1118 00:30:06.143047    8641 ssh_runner.go:152] Run: sudo mkdir -p /etc/cni/net.d
	I1118 00:30:06.158815    8641 ssh_runner.go:319] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1118 00:30:06.190660    8641 system_pods.go:43] waiting for kube-system pods to appear ...
	I1118 00:30:06.204413    8641 system_pods.go:59] 8 kube-system pods found
	I1118 00:30:06.204437    8641 system_pods.go:61] "coredns-78fcd69978-w7f84" [69eb22fb-1eac-462e-b9b3-d033acc236e6] Running
	I1118 00:30:06.204442    8641 system_pods.go:61] "etcd-default-k8s-different-port-20211118002540-20973" [a866d364-933c-483c-b6d5-e3c3f25954c4] Running
	I1118 00:30:06.204448    8641 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20211118002540-20973" [a83d7d87-d4e9-4eeb-b758-b3235d81f3f6] Running
	I1118 00:30:06.204452    8641 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20211118002540-20973" [efe1e3d8-5312-423c-a06a-405d4eaa28cd] Running
	I1118 00:30:06.204455    8641 system_pods.go:61] "kube-proxy-9h9t2" [b2ebaeb1-51f8-4ce8-9d72-3faa574d2edc] Running
	I1118 00:30:06.204459    8641 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20211118002540-20973" [4a18c60a-834e-4dd2-9fbe-498cc9e0658b] Running
	I1118 00:30:06.204465    8641 system_pods.go:61] "metrics-server-7c784ccb57-wtbjh" [50eaa921-3bef-4771-b606-70b73ecf757d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1118 00:30:06.204471    8641 system_pods.go:61] "storage-provisioner" [ccc96acf-f005-4665-96cf-5934d50c7fff] Running
	I1118 00:30:06.204476    8641 system_pods.go:74] duration metric: took 13.798129ms to wait for pod list to return data ...
	I1118 00:30:06.204482    8641 node_conditions.go:102] verifying NodePressure condition ...
	I1118 00:30:06.209454    8641 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1118 00:30:06.209504    8641 node_conditions.go:123] node cpu capacity is 2
	I1118 00:30:06.209516    8641 node_conditions.go:105] duration metric: took 5.028571ms to run NodePressure ...
	I1118 00:30:06.209534    8641 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1118 00:30:06.576681    8641 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1118 00:30:06.583420    8641 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I1118 00:30:05.669359    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:07.673747    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:10.173635    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:05.980212    8300 pod_ready.go:92] pod "kube-proxy-57jtv" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:05.980232    8300 pod_ready.go:81] duration metric: took 402.148058ms waiting for pod "kube-proxy-57jtv" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:05.980242    8300 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:06.379377    8300 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:06.379403    8300 pod_ready.go:81] duration metric: took 399.153284ms waiting for pod "kube-scheduler-old-k8s-version-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:06.379418    8300 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:08.787162    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:07.148434    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:09.649574    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:06.951882    8641 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I1118 00:30:07.395838    8641 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I1118 00:30:07.931747    8641 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I1118 00:30:08.719598    8641 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I1118 00:30:10.234112    8641 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I1118 00:30:11.314246    8641 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I1118 00:30:12.671183    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:14.673304    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:10.787665    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:13.287892    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:15.288347    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:12.147315    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:14.153261    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:13.190567    8641 kubeadm.go:746] kubelet initialised
	I1118 00:30:13.190596    8641 kubeadm.go:747] duration metric: took 6.613870641s waiting for restarted kubelet to initialise ...
	I1118 00:30:13.190605    8641 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:30:13.197310    8641 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-w7f84" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:13.219601    8641 pod_ready.go:92] pod "coredns-78fcd69978-w7f84" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:13.219632    8641 pod_ready.go:81] duration metric: took 22.290317ms waiting for pod "coredns-78fcd69978-w7f84" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:13.219646    8641 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:13.225972    8641 pod_ready.go:92] pod "etcd-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:13.225997    8641 pod_ready.go:81] duration metric: took 6.341399ms waiting for pod "etcd-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:13.226011    8641 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:15.246422    8641 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:17.173745    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:19.669878    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:17.288642    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:19.785706    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:16.647199    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:19.147574    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:17.742120    8641 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:20.242581    8641 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:21.673800    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:24.169637    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:21.789298    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:24.287726    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:21.647028    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:23.651794    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:21.742245    8641 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:21.742270    8641 pod_ready.go:81] duration metric: took 8.516249466s waiting for pod "kube-apiserver-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:21.742284    8641 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:23.758870    8641 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:25.256626    8641 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:25.256656    8641 pod_ready.go:81] duration metric: took 3.514361808s waiting for pod "kube-controller-manager-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:25.256670    8641 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9h9t2" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:26.172027    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:28.676041    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:26.289591    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:28.786593    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:26.149384    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:28.648281    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:27.280832    8641 pod_ready.go:102] pod "kube-proxy-9h9t2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:29.775445    8641 pod_ready.go:102] pod "kube-proxy-9h9t2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:30.276194    8641 pod_ready.go:92] pod "kube-proxy-9h9t2" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:30.276230    8641 pod_ready.go:81] duration metric: took 5.019550936s waiting for pod "kube-proxy-9h9t2" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:30.276244    8641 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:30.283140    8641 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:30:30.283176    8641 pod_ready.go:81] duration metric: took 6.921767ms waiting for pod "kube-scheduler-default-k8s-different-port-20211118002540-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:30.283190    8641 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace to be "Ready" ...
	I1118 00:30:31.173403    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:33.672618    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:30.787488    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:33.287263    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:35.290336    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:31.147171    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:33.147930    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:35.645550    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:32.300449    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:34.304269    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:36.170670    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:38.171299    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:40.171886    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:37.787728    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:40.288020    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:37.647950    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:39.648188    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:36.801450    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:39.303883    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:42.173505    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:44.681403    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:42.788740    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:45.285630    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:42.145960    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:44.146297    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:41.801264    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:44.306837    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:47.170024    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:49.669561    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:47.287339    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:49.287606    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:46.146857    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:48.147355    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:50.147713    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:46.799929    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:48.800538    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:51.304503    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:52.174069    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:54.670899    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:51.288896    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:53.786673    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:52.148043    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:54.647160    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:53.801847    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:55.802447    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:57.171660    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:59.175263    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:55.787113    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:58.285174    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:00.286054    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:56.648851    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:59.146802    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:30:58.302649    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:00.303148    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:01.670624    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:03.674474    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:02.287958    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:04.288251    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:01.146880    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:03.646648    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:02.799294    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:04.807354    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:06.169165    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:08.169279    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:10.176048    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:06.786704    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:08.787036    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:06.146376    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:08.649736    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:07.299349    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:09.299875    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:11.310670    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:12.669749    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:14.670587    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:11.285927    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:13.288225    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:11.147837    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:13.646291    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:13.801613    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:16.300004    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:16.671863    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:19.171352    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:15.786450    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:18.286760    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:20.287508    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:16.146567    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:18.649920    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:18.305919    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:20.802178    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:21.680172    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:24.173372    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:22.288146    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:24.288822    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:21.145222    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:23.145551    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:25.147128    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:23.300297    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:25.801084    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:26.670730    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:29.177424    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:26.788372    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:29.287732    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:27.155095    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:29.648828    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:28.303493    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:30.799873    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:31.670717    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:33.672236    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:31.786689    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:34.285807    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:32.147469    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:34.649422    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:32.802163    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:35.299113    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:36.172751    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:38.671529    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:36.286030    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:38.287269    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:40.289305    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:37.147890    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:39.148896    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:37.304712    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:39.308263    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:41.169923    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:43.673607    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:42.786250    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:44.786704    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:41.646679    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:44.148078    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:41.801783    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:43.804363    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:46.302518    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:46.170542    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:48.672735    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:46.787702    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:49.288043    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:46.650436    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:49.147167    8006 pod_ready.go:102] pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:50.138267    8006 pod_ready.go:81] duration metric: took 4m0.332769554s waiting for pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace to be "Ready" ...
	E1118 00:31:50.138304    8006 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-jlvfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I1118 00:31:50.138325    8006 pod_ready.go:38] duration metric: took 4m5.457938611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:31:50.138363    8006 kubeadm.go:604] restartCluster took 4m31.332519576s
	W1118 00:31:50.138968    8006 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1118 00:31:50.139062    8006 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1118 00:31:48.303245    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:50.309380    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:53.231309    8006 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.092204736s)
	I1118 00:31:53.231378    8006 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1118 00:31:53.250206    8006 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1118 00:31:53.250268    8006 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1118 00:31:53.289073    8006 cri.go:76] found id: "c5d43e093ec99cc97dd5696b2a2660a5fc1ae66728dac8c72c75862fd0063b9c"
	I1118 00:31:53.289091    8006 cri.go:76] found id: ""
	W1118 00:31:53.289098    8006 kubeadm.go:840] found 1 kube-system containers to stop
	I1118 00:31:53.289104    8006 cri.go:220] Stopping containers: [c5d43e093ec99cc97dd5696b2a2660a5fc1ae66728dac8c72c75862fd0063b9c]
	I1118 00:31:53.289153    8006 ssh_runner.go:152] Run: which crictl
	I1118 00:31:53.294443    8006 ssh_runner.go:152] Run: sudo /usr/bin/crictl stop c5d43e093ec99cc97dd5696b2a2660a5fc1ae66728dac8c72c75862fd0063b9c
	I1118 00:31:53.333450    8006 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1118 00:31:53.346018    8006 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1118 00:31:53.357509    8006 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1118 00:31:53.357542    8006 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I1118 00:31:53.772741    8006 out.go:203]   - Generating certificates and keys ...
	I1118 00:31:51.171557    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:53.172631    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:51.787006    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:53.789255    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:54.771052    8006 out.go:203]   - Booting up control plane ...
	I1118 00:31:52.800785    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:55.305525    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:55.672178    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:57.675137    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:00.169576    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:56.286459    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:58.786305    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:31:57.800253    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:00.300742    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:02.171391    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:04.677196    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:01.290536    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:03.788599    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:02.306941    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:04.801047    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:07.172063    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:09.676429    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:05.789887    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:08.287487    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:10.289022    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:06.804146    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:09.299005    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:12.395471    8006 out.go:203]   - Configuring RBAC rules ...
	I1118 00:32:13.105659    8006 cni.go:93] Creating CNI manager for ""
	I1118 00:32:13.105690    8006 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1118 00:32:12.170150    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:14.175277    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:12.289120    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:14.786994    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:13.107599    8006 out.go:176] * Configuring bridge CNI (Container Networking Interface) ...
	I1118 00:32:13.107682    8006 ssh_runner.go:152] Run: sudo mkdir -p /etc/cni/net.d
	I1118 00:32:13.154950    8006 ssh_runner.go:319] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1118 00:32:13.227824    8006 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1118 00:32:13.227969    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:13.228052    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=b7b0a42f687dae576880a10f0aa2f899d9174438 minikube.k8s.io/name=no-preload-20211118002250-20973 minikube.k8s.io/updated_at=2021_11_18T00_32_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:13.559063    8006 ops.go:34] apiserver oom_adj: -16
	I1118 00:32:13.559160    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:14.162361    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:14.662325    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:15.161706    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:15.662210    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:11.800162    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:13.800654    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:15.801234    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:16.670633    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:18.674246    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:16.788525    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:19.287974    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:16.162335    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:16.662208    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:17.162299    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:17.662066    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:18.162542    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:18.661546    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:19.162394    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:19.661966    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:20.162538    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:20.661670    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:17.801828    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:20.304408    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:21.177180    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:23.674426    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:21.288396    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:23.787379    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:21.162364    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:21.662513    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:22.161557    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:22.661863    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:23.162363    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:23.662251    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:24.162550    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:24.661823    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:25.162141    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:25.662496    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:22.801658    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:24.804007    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:26.161852    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:26.661560    8006 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1118 00:32:26.905717    8006 kubeadm.go:985] duration metric: took 13.677798876s to wait for elevateKubeSystemPrivileges.
	I1118 00:32:26.905747    8006 kubeadm.go:392] StartCluster complete in 5m8.247459616s
	I1118 00:32:26.905765    8006 settings.go:142] acquiring lock: {Name:mkafbfaf35d2571ffc7ee7d797e631d4136c0aab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1118 00:32:26.905886    8006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1118 00:32:26.907869    8006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig: {Name:mk4bc2bc72dc43be7e3142c29995aa6eeea09f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1118 00:32:27.455325    8006 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20211118002250-20973" rescaled to 1
	I1118 00:32:27.455488    8006 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1118 00:32:27.455495    8006 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.50.33 Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}
	I1118 00:32:27.457380    8006 out.go:176] * Verifying Kubernetes components...
	I1118 00:32:27.455598    8006 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1118 00:32:27.455756    8006 config.go:176] Loaded profile config "no-preload-20211118002250-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.4-rc.0
	I1118 00:32:27.457529    8006 addons.go:65] Setting dashboard=true in profile "no-preload-20211118002250-20973"
	I1118 00:32:27.457544    8006 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20211118002250-20973"
	I1118 00:32:27.457562    8006 addons.go:65] Setting metrics-server=true in profile "no-preload-20211118002250-20973"
	I1118 00:32:27.457563    8006 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1118 00:32:27.457575    8006 addons.go:153] Setting addon metrics-server=true in "no-preload-20211118002250-20973"
	W1118 00:32:27.457585    8006 addons.go:165] addon metrics-server should already be in state true
	I1118 00:32:27.457619    8006 host.go:66] Checking if "no-preload-20211118002250-20973" exists ...
	I1118 00:32:27.457565    8006 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20211118002250-20973"
	W1118 00:32:27.457666    8006 addons.go:165] addon storage-provisioner should already be in state true
	I1118 00:32:27.457705    8006 host.go:66] Checking if "no-preload-20211118002250-20973" exists ...
	I1118 00:32:27.458125    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.458165    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.458188    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.457549    8006 addons.go:153] Setting addon dashboard=true in "no-preload-20211118002250-20973"
	I1118 00:32:27.458230    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	W1118 00:32:27.458233    8006 addons.go:165] addon dashboard should already be in state true
	I1118 00:32:27.458384    8006 host.go:66] Checking if "no-preload-20211118002250-20973" exists ...
	I1118 00:32:27.457547    8006 addons.go:65] Setting default-storageclass=true in profile "no-preload-20211118002250-20973"
	I1118 00:32:27.458473    8006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20211118002250-20973"
	I1118 00:32:27.458759    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.458849    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.458912    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.458948    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.477926    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1118 00:32:27.477942    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38463
	I1118 00:32:27.478512    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.478887    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.479398    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.479419    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.479555    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.479573    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.479641    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40283
	I1118 00:32:27.480372    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.480392    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.480374    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.480909    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.480927    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.481236    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.481248    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.481274    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.481297    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.481340    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.481447    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetState
	I1118 00:32:27.486298    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42183
	I1118 00:32:27.486650    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.487100    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.487116    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.487471    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.488228    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.488263    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.494216    8006 addons.go:153] Setting addon default-storageclass=true in "no-preload-20211118002250-20973"
	W1118 00:32:27.494245    8006 addons.go:165] addon default-storageclass should already be in state true
	I1118 00:32:27.494274    8006 host.go:66] Checking if "no-preload-20211118002250-20973" exists ...
	I1118 00:32:27.494669    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.494704    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.496036    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1118 00:32:27.496437    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.496589    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38049
	I1118 00:32:27.496874    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.496894    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.496999    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.497385    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.497544    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.497563    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.497589    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetState
	I1118 00:32:27.497860    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.498058    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetState
	I1118 00:32:27.501496    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .DriverName
	I1118 00:32:27.504179    8006 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1118 00:32:27.503070    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .DriverName
	I1118 00:32:27.504345    8006 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1118 00:32:27.504373    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1118 00:32:27.504393    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHHostname
	I1118 00:32:27.506182    8006 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1118 00:32:27.507932    8006 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1118 00:32:27.508000    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1118 00:32:27.508012    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1118 00:32:27.508030    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHHostname
	I1118 00:32:27.511016    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1118 00:32:27.511540    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.512052    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.512075    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.512518    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.512724    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetState
	I1118 00:32:27.513495    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.514350    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:8e:2e", ip: ""} in network mk-no-preload-20211118002250-20973: {Iface:virbr2 ExpiryTime:2021-11-18 01:27:00 +0000 UTC Type:0 Mac:52:54:00:84:8e:2e Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:no-preload-20211118002250-20973 Clientid:01:52:54:00:84:8e:2e}
	I1118 00:32:27.514385    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined IP address 192.168.50.33 and MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.515855    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHPort
	I1118 00:32:27.516042    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHKeyPath
	I1118 00:32:27.516206    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46217
	I1118 00:32:27.516224    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHUsername
	I1118 00:32:27.516393    8006 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/no-preload-20211118002250-20973/id_rsa Username:docker}
	I1118 00:32:27.516689    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.516955    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .DriverName
	I1118 00:32:27.517223    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.517241    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:26.173223    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:28.671578    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:26.289919    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:28.789050    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:27.519870    8006 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1118 00:32:27.519938    8006 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1118 00:32:27.519953    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1118 00:32:27.517337    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.519971    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHHostname
	I1118 00:32:27.520001    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:8e:2e", ip: ""} in network mk-no-preload-20211118002250-20973: {Iface:virbr2 ExpiryTime:2021-11-18 01:27:00 +0000 UTC Type:0 Mac:52:54:00:84:8e:2e Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:no-preload-20211118002250-20973 Clientid:01:52:54:00:84:8e:2e}
	I1118 00:32:27.517566    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.520052    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined IP address 192.168.50.33 and MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.518217    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHPort
	I1118 00:32:27.520304    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHKeyPath
	I1118 00:32:27.520639    8006 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:32:27.520682    8006 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:32:27.520883    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHUsername
	I1118 00:32:27.521026    8006 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/no-preload-20211118002250-20973/id_rsa Username:docker}
	I1118 00:32:27.526587    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.527061    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:8e:2e", ip: ""} in network mk-no-preload-20211118002250-20973: {Iface:virbr2 ExpiryTime:2021-11-18 01:27:00 +0000 UTC Type:0 Mac:52:54:00:84:8e:2e Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:no-preload-20211118002250-20973 Clientid:01:52:54:00:84:8e:2e}
	I1118 00:32:27.527090    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined IP address 192.168.50.33 and MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.527210    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHPort
	I1118 00:32:27.527440    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHKeyPath
	I1118 00:32:27.527624    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHUsername
	I1118 00:32:27.527752    8006 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/no-preload-20211118002250-20973/id_rsa Username:docker}
	I1118 00:32:27.551576    8006 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35289
	I1118 00:32:27.552083    8006 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:32:27.552564    8006 main.go:130] libmachine: Using API Version  1
	I1118 00:32:27.552595    8006 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:32:27.552944    8006 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:32:27.553206    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetState
	I1118 00:32:27.556449    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .DriverName
	I1118 00:32:27.556772    8006 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1118 00:32:27.556790    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1118 00:32:27.556808    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHHostname
	I1118 00:32:27.562947    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.563467    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:8e:2e", ip: ""} in network mk-no-preload-20211118002250-20973: {Iface:virbr2 ExpiryTime:2021-11-18 01:27:00 +0000 UTC Type:0 Mac:52:54:00:84:8e:2e Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:no-preload-20211118002250-20973 Clientid:01:52:54:00:84:8e:2e}
	I1118 00:32:27.563492    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | domain no-preload-20211118002250-20973 has defined IP address 192.168.50.33 and MAC address 52:54:00:84:8e:2e in network mk-no-preload-20211118002250-20973
	I1118 00:32:27.563629    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHPort
	I1118 00:32:27.563818    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHKeyPath
	I1118 00:32:27.563991    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .GetSSHUsername
	I1118 00:32:27.564141    8006 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/no-preload-20211118002250-20973/id_rsa Username:docker}
	I1118 00:32:27.831264    8006 node_ready.go:35] waiting up to 6m0s for node "no-preload-20211118002250-20973" to be "Ready" ...
	I1118 00:32:27.831785    8006 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1118 00:32:27.836041    8006 node_ready.go:49] node "no-preload-20211118002250-20973" has status "Ready":"True"
	I1118 00:32:27.836070    8006 node_ready.go:38] duration metric: took 4.775878ms waiting for node "no-preload-20211118002250-20973" to be "Ready" ...
	I1118 00:32:27.836081    8006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:32:27.859590    8006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:27.952081    8006 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1118 00:32:27.952119    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1118 00:32:28.009568    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1118 00:32:28.009598    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1118 00:32:28.037117    8006 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1118 00:32:28.088693    8006 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1118 00:32:28.221364    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1118 00:32:28.221391    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1118 00:32:28.264065    8006 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1118 00:32:28.264093    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1118 00:32:28.376468    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1118 00:32:28.376499    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1118 00:32:28.691960    8006 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1118 00:32:28.691996    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1118 00:32:28.830256    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1118 00:32:28.830275    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1118 00:32:28.914474    8006 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1118 00:32:29.394425    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1118 00:32:29.394469    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1118 00:32:29.764988    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1118 00:32:29.765013    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1118 00:32:29.892157    8006 pod_ready.go:102] pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:29.976154    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1118 00:32:29.976189    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1118 00:32:30.066555    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1118 00:32:30.066580    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1118 00:32:30.115316    8006 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1118 00:32:30.115360    8006 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1118 00:32:30.123347    8006 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.291522464s)
	I1118 00:32:30.123383    8006 start.go:739] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I1118 00:32:30.277003    8006 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1118 00:32:30.386498    8006 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.349342515s)
	I1118 00:32:30.386552    8006 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297830418s)
	I1118 00:32:30.386590    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.386607    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.386556    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.386671    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.386910    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.386927    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.386937    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.386947    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.387032    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:30.387060    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.387076    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.387097    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.387107    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.387240    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.387248    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:30.387252    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.387265    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.387273    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.387336    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:30.387546    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.387560    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.388599    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.388612    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:27.302612    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:29.803740    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:30.978844    8006 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.064321329s)
	I1118 00:32:30.978896    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.978910    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.979256    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.979258    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:30.979275    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.979284    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:30.979293    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:30.979543    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:30.979608    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:30.979624    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.979635    8006 addons.go:386] Verifying addon metrics-server=true in "no-preload-20211118002250-20973"
	I1118 00:32:31.985786    8006 pod_ready.go:102] pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:32.150164    8006 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.873119327s)
	I1118 00:32:32.150213    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:32.150225    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:32.150523    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) DBG | Closing plugin on server side
	I1118 00:32:32.150577    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:32.150590    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:32.150599    8006 main.go:130] libmachine: Making call to close driver server
	I1118 00:32:32.150608    8006 main.go:130] libmachine: (no-preload-20211118002250-20973) Calling .Close
	I1118 00:32:32.150874    8006 main.go:130] libmachine: Successfully made call to close driver server
	I1118 00:32:32.150898    8006 main.go:130] libmachine: Making call to close connection to plugin binary
	I1118 00:32:30.671634    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:32.674563    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:35.170455    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:31.289399    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:33.788520    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:32.152871    8006 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1118 00:32:32.152903    8006 addons.go:417] enableAddons completed in 4.697310594s
	I1118 00:32:34.392457    8006 pod_ready.go:102] pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:35.903512    8006 pod_ready.go:97] error getting pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-r8wxz" not found
	I1118 00:32:35.903545    8006 pod_ready.go:81] duration metric: took 8.043931895s waiting for pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace to be "Ready" ...
	E1118 00:32:35.903555    8006 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-r8wxz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-r8wxz" not found
	I1118 00:32:35.903563    8006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-rng6f" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.910991    8006 pod_ready.go:92] pod "coredns-78fcd69978-rng6f" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:35.911012    8006 pod_ready.go:81] duration metric: took 7.439895ms waiting for pod "coredns-78fcd69978-rng6f" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.911026    8006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.923915    8006 pod_ready.go:92] pod "etcd-no-preload-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:35.923931    8006 pod_ready.go:81] duration metric: took 12.897654ms waiting for pod "etcd-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.923942    8006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:32.301658    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:34.800075    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:35.941019    8006 pod_ready.go:92] pod "kube-apiserver-no-preload-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:35.941036    8006 pod_ready.go:81] duration metric: took 17.086491ms waiting for pod "kube-apiserver-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.941045    8006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.951248    8006 pod_ready.go:92] pod "kube-controller-manager-no-preload-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:35.951267    8006 pod_ready.go:81] duration metric: took 10.215814ms waiting for pod "kube-controller-manager-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:35.951278    8006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rztx" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:36.079661    8006 pod_ready.go:92] pod "kube-proxy-6rztx" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:36.079689    8006 pod_ready.go:81] duration metric: took 128.402213ms waiting for pod "kube-proxy-6rztx" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:36.079709    8006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:36.480826    8006 pod_ready.go:92] pod "kube-scheduler-no-preload-20211118002250-20973" in "kube-system" namespace has status "Ready":"True"
	I1118 00:32:36.480848    8006 pod_ready.go:81] duration metric: took 401.130932ms waiting for pod "kube-scheduler-no-preload-20211118002250-20973" in "kube-system" namespace to be "Ready" ...
	I1118 00:32:36.480856    8006 pod_ready.go:38] duration metric: took 8.644753318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:32:36.480869    8006 api_server.go:51] waiting for apiserver process to appear ...
	I1118 00:32:36.480917    8006 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1118 00:32:36.501307    8006 api_server.go:71] duration metric: took 9.045779093s to wait for apiserver process to appear ...
	I1118 00:32:36.501327    8006 api_server.go:87] waiting for apiserver healthz status ...
	I1118 00:32:36.501336    8006 api_server.go:240] Checking apiserver healthz at https://192.168.50.33:8443/healthz ...
	I1118 00:32:36.507371    8006 api_server.go:266] https://192.168.50.33:8443/healthz returned 200:
	ok
	I1118 00:32:36.508364    8006 api_server.go:140] control plane version: v1.22.4-rc.0
	I1118 00:32:36.508381    8006 api_server.go:130] duration metric: took 7.048352ms to wait for apiserver health ...
	I1118 00:32:36.508389    8006 system_pods.go:43] waiting for kube-system pods to appear ...
	I1118 00:32:36.682658    8006 system_pods.go:59] 8 kube-system pods found
	I1118 00:32:36.682687    8006 system_pods.go:61] "coredns-78fcd69978-rng6f" [c8310116-4453-4741-8a85-93cb19b62755] Running
	I1118 00:32:36.682693    8006 system_pods.go:61] "etcd-no-preload-20211118002250-20973" [9441a3d3-a084-44ef-a99c-7b3165644c99] Running
	I1118 00:32:36.682702    8006 system_pods.go:61] "kube-apiserver-no-preload-20211118002250-20973" [d6401376-2b6c-44b8-be3c-5e1c6c8cb31d] Running
	I1118 00:32:36.682708    8006 system_pods.go:61] "kube-controller-manager-no-preload-20211118002250-20973" [2d59a270-e1e3-425f-a584-737a2879e276] Running
	I1118 00:32:36.682713    8006 system_pods.go:61] "kube-proxy-6rztx" [de49fd1f-f55e-43d3-89c1-3223cd628a74] Running
	I1118 00:32:36.682719    8006 system_pods.go:61] "kube-scheduler-no-preload-20211118002250-20973" [2cfe953f-7ca0-4846-a084-10c37f746d39] Running
	I1118 00:32:36.682730    8006 system_pods.go:61] "metrics-server-7c784ccb57-nl8r7" [14b0e7f6-ff8e-40ee-9792-21d8ef1e24cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1118 00:32:36.682737    8006 system_pods.go:61] "storage-provisioner" [b274429d-bba2-4b21-ac29-e8cb567d0b9b] Running
	I1118 00:32:36.682746    8006 system_pods.go:74] duration metric: took 174.350174ms to wait for pod list to return data ...
	I1118 00:32:36.682756    8006 default_sa.go:34] waiting for default service account to be created ...
	I1118 00:32:36.880456    8006 default_sa.go:45] found service account: "default"
	I1118 00:32:36.880482    8006 default_sa.go:55] duration metric: took 197.719091ms for default service account to be created ...
	I1118 00:32:36.880490    8006 system_pods.go:116] waiting for k8s-apps to be running ...
	I1118 00:32:37.084662    8006 system_pods.go:86] 8 kube-system pods found
	I1118 00:32:37.084706    8006 system_pods.go:89] "coredns-78fcd69978-rng6f" [c8310116-4453-4741-8a85-93cb19b62755] Running
	I1118 00:32:37.084715    8006 system_pods.go:89] "etcd-no-preload-20211118002250-20973" [9441a3d3-a084-44ef-a99c-7b3165644c99] Running
	I1118 00:32:37.084723    8006 system_pods.go:89] "kube-apiserver-no-preload-20211118002250-20973" [d6401376-2b6c-44b8-be3c-5e1c6c8cb31d] Running
	I1118 00:32:37.084730    8006 system_pods.go:89] "kube-controller-manager-no-preload-20211118002250-20973" [2d59a270-e1e3-425f-a584-737a2879e276] Running
	I1118 00:32:37.084736    8006 system_pods.go:89] "kube-proxy-6rztx" [de49fd1f-f55e-43d3-89c1-3223cd628a74] Running
	I1118 00:32:37.084743    8006 system_pods.go:89] "kube-scheduler-no-preload-20211118002250-20973" [2cfe953f-7ca0-4846-a084-10c37f746d39] Running
	I1118 00:32:37.084754    8006 system_pods.go:89] "metrics-server-7c784ccb57-nl8r7" [14b0e7f6-ff8e-40ee-9792-21d8ef1e24cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1118 00:32:37.084766    8006 system_pods.go:89] "storage-provisioner" [b274429d-bba2-4b21-ac29-e8cb567d0b9b] Running
	I1118 00:32:37.084778    8006 system_pods.go:126] duration metric: took 204.280836ms to wait for k8s-apps to be running ...
	I1118 00:32:37.084794    8006 system_svc.go:44] waiting for kubelet service to be running ....
	I1118 00:32:37.084847    8006 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1118 00:32:37.108443    8006 system_svc.go:56] duration metric: took 23.642069ms WaitForService to wait for kubelet.
	I1118 00:32:37.108475    8006 kubeadm.go:547] duration metric: took 9.652951156s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1118 00:32:37.108495    8006 node_conditions.go:102] verifying NodePressure condition ...
	I1118 00:32:37.281665    8006 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1118 00:32:37.281694    8006 node_conditions.go:123] node cpu capacity is 2
	I1118 00:32:37.281764    8006 node_conditions.go:105] duration metric: took 173.262642ms to run NodePressure ...
	I1118 00:32:37.281778    8006 start.go:234] waiting for startup goroutines ...
	I1118 00:32:37.335321    8006 start.go:486] kubectl: 1.20.5, cluster: 1.22.4-rc.0 (minor skew: 2)
	I1118 00:32:37.337620    8006 out.go:176] 
	W1118 00:32:37.337833    8006 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.4-rc.0.
	I1118 00:32:37.339528    8006 out.go:176]   - Want kubectl v1.22.4-rc.0? Try 'minikube kubectl -- get pods -A'
	I1118 00:32:37.341199    8006 out.go:176] * Done! kubectl is now configured to use "no-preload-20211118002250-20973" cluster and "default" namespace by default
	I1118 00:32:37.172419    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:39.671464    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:36.287648    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:38.288620    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:40.290329    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:36.800567    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:38.801747    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:40.801965    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:41.671503    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:44.171737    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:42.786516    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:44.788335    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:43.299476    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:45.799883    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:46.173295    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:48.674805    7887 pod_ready.go:102] pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:47.285719    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:49.288903    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:47.800311    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:50.299991    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:50.661300    7887 pod_ready.go:81] duration metric: took 4m0.39845208s waiting for pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace to be "Ready" ...
	E1118 00:32:50.661323    7887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-n9gx9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1118 00:32:50.661340    7887 pod_ready.go:38] duration metric: took 4m44.829417626s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1118 00:32:50.661366    7887 kubeadm.go:604] restartCluster took 5m42.43167872s
	W1118 00:32:50.661511    7887 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1118 00:32:50.661542    7887 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1118 00:32:53.225982    7887 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.564415743s)
	I1118 00:32:53.226057    7887 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I1118 00:32:53.245838    7887 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1118 00:32:53.245938    7887 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1118 00:32:53.283873    7887 cri.go:76] found id: ""
	I1118 00:32:53.283937    7887 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1118 00:32:53.297587    7887 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1118 00:32:53.312701    7887 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1118 00:32:53.312732    7887 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I1118 00:32:53.763336    7887 out.go:203]   - Generating certificates and keys ...
	I1118 00:32:55.081545    7887 out.go:203]   - Booting up control plane ...
	I1118 00:32:51.290595    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:53.788713    8300 pod_ready.go:102] pod "metrics-server-8546d8b77b-v2gl2" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:52.303169    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	I1118 00:32:54.803067    8641 pod_ready.go:102] pod "metrics-server-7c784ccb57-wtbjh" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	97b14e302db79       523cad1a4df73       3 seconds ago       Exited              dashboard-metrics-scraper   2                   a9edb10994be7
	61e3a4068cd06       e1482a24335a6       14 seconds ago      Running             kubernetes-dashboard        0                   4389ab91e6950
	9f893ada9ebfe       6e38f40d628db       24 seconds ago      Running             storage-provisioner         0                   ebc580d2d4fa4
	d2c381498676f       8d147537fb7d1       28 seconds ago      Running             coredns                     0                   3ba60ac8e4e15
	f72bf19c2b8ed       10c9f2e987d6f       30 seconds ago      Running             kube-proxy                  0                   cc4ad9f5d53bf
	6b6cb829e1c19       0048118155842       55 seconds ago      Running             etcd                        2                   c770c9b2ef638
	79c7ebb4c5f3e       07b5fb2b707e6       55 seconds ago      Running             kube-scheduler              2                   81223be20699e
	04e1fb4fb0435       32b7de249e8c8       55 seconds ago      Running             kube-apiserver              2                   c57ea57c7aa68
	48abaccdd682a       db1784d0aa92a       56 seconds ago      Running             kube-controller-manager     2                   1139b880bf89b
	c5d43e093ec99       8d147537fb7d1       5 minutes ago       Exited              coredns                     1                   2097a294415c5
	cbfb971eef3b8       56cc512116c8f       5 minutes ago       Exited              busybox                     1                   e49532acaf0ef
	
	* 
	* ==> containerd <==
	* -- Journal begins at Thu 2021-11-18 00:26:59 UTC, ends at Thu 2021-11-18 00:32:57 UTC. --
	Nov 18 00:32:40 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:40.722977109Z" level=error msg="copy shim log" error="read /proc/self/fd/94: file already closed"
	Nov 18 00:32:40 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:40.833091163Z" level=info msg="RemoveContainer for \"7e6dd3659bef022a57627a6634cdde7437f6ba397371df0ffb88b35e0e63b08f\""
	Nov 18 00:32:40 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:40.845915548Z" level=info msg="RemoveContainer for \"7e6dd3659bef022a57627a6634cdde7437f6ba397371df0ffb88b35e0e63b08f\" returns successfully"
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.788701179Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.794686074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.796992543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.798315388Z" level=info msg="PullImage \"kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e\" returns image reference \"sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570\""
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.809426573Z" level=info msg="CreateContainer within sandbox \"4389ab91e6950ac5666a94564ecfd3fa2412dc9a6d150dc8d48e72650ce2b842\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.917178725Z" level=info msg="CreateContainer within sandbox \"4389ab91e6950ac5666a94564ecfd3fa2412dc9a6d150dc8d48e72650ce2b842\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"61e3a4068cd06bf34943509ec5f5dc252cadeaf44812b2baca7698a475981c16\""
	Nov 18 00:32:43 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:43.918173115Z" level=info msg="StartContainer for \"61e3a4068cd06bf34943509ec5f5dc252cadeaf44812b2baca7698a475981c16\""
	Nov 18 00:32:44 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:44.060804429Z" level=info msg="StartContainer for \"61e3a4068cd06bf34943509ec5f5dc252cadeaf44812b2baca7698a475981c16\" returns successfully"
	Nov 18 00:32:44 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:44.417088504Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Nov 18 00:32:44 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:44.430854124Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Nov 18 00:32:44 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:44.434249448Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.428823042Z" level=info msg="CreateContainer within sandbox \"a9edb10994be7e71699c02609dc1273dc3d45eca5c489c4f8b3e8a0515a18e55\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.479808339Z" level=info msg="CreateContainer within sandbox \"a9edb10994be7e71699c02609dc1273dc3d45eca5c489c4f8b3e8a0515a18e55\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c\""
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.481096859Z" level=info msg="StartContainer for \"97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c\""
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.969856031Z" level=info msg="StartContainer for \"97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c\" returns successfully"
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.986188195Z" level=info msg="TaskExit event &TaskExit{ContainerID:97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c,ID:97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c,Pid:6371,ExitStatus:1,ExitedAt:2021-11-18 00:32:54.98591702 +0000 UTC,XXX_unrecognized:[],}"
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.986655032Z" level=info msg="Finish piping stdout of container \"97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c\""
	Nov 18 00:32:54 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:54.988392126Z" level=info msg="Finish piping stderr of container \"97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c\""
	Nov 18 00:32:55 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:55.041575081Z" level=info msg="shim disconnected" id=97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c
	Nov 18 00:32:55 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:55.041763475Z" level=error msg="copy shim log" error="read /proc/self/fd/129: file already closed"
	Nov 18 00:32:55 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:55.899311108Z" level=info msg="RemoveContainer for \"9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc\""
	Nov 18 00:32:55 no-preload-20211118002250-20973 containerd[2104]: time="2021-11-18T00:32:55.907552660Z" level=info msg="RemoveContainer for \"9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc\" returns successfully"
	
	* 
	* ==> coredns [c5d43e093ec99cc97dd5696b2a2660a5fc1ae66728dac8c72c75862fd0063b9c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [d2c381498676ffd5927af634f13367ca21be6af97c77f1ca5657a90562a51f70] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20211118002250-20973
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20211118002250-20973
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b7b0a42f687dae576880a10f0aa2f899d9174438
	                    minikube.k8s.io/name=no-preload-20211118002250-20973
	                    minikube.k8s.io/updated_at=2021_11_18T00_32_13_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 18 Nov 2021 00:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20211118002250-20973
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 18 Nov 2021 00:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 18 Nov 2021 00:32:48 +0000   Thu, 18 Nov 2021 00:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 18 Nov 2021 00:32:48 +0000   Thu, 18 Nov 2021 00:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 18 Nov 2021 00:32:48 +0000   Thu, 18 Nov 2021 00:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 18 Nov 2021 00:32:48 +0000   Thu, 18 Nov 2021 00:32:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.33
	  Hostname:    no-preload-20211118002250-20973
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0202b443f28d418e9a190d954e21be05
	  System UUID:                0202b443-f28d-418e-9a19-0d954e21be05
	  Boot ID:                    ad153293-ffb9-4c92-802d-7908106bbc02
	  Kernel Version:             4.19.202
	  OS Image:                   Buildroot 2021.02.4
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.4-rc.0
	  Kube-Proxy Version:         v1.22.4-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-rng6f                                   100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     32s
	  kube-system                 etcd-no-preload-20211118002250-20973                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         46s
	  kube-system                 kube-apiserver-no-preload-20211118002250-20973             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-no-preload-20211118002250-20973    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-6rztx                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-20211118002250-20973             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 metrics-server-7c784ccb57-nl8r7                            100m (5%)     0 (0%)      300Mi (14%)      0 (0%)         28s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-s7fs9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-654cf69797-44bzg                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             470Mi (22%)  170Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 28s                kube-proxy  
	  Normal  NodeHasSufficientMemory  58s (x6 over 58s)  kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x5 over 58s)  kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x5 over 58s)  kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  40s                kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet     Node no-preload-20211118002250-20973 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 40s                kubelet     Starting kubelet.
	  Normal  NodeReady                32s                kubelet     Node no-preload-20211118002250-20973 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[  +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.004575] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.032644] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.320061] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
	[Nov18 00:27] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +4.914355] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005770] vboxguest: PCI device not found, probably running on physical hardware.
	[  +3.080881] systemd-fstab-generator[2057]: Ignoring "noauto" for root device
	[  +0.131523] systemd-fstab-generator[2068]: Ignoring "noauto" for root device
	[  +0.267820] systemd-fstab-generator[2096]: Ignoring "noauto" for root device
	[ +13.859949] systemd-fstab-generator[2461]: Ignoring "noauto" for root device
	[ +17.751419] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.849633] kauditd_printk_skb: 89 callbacks suppressed
	[  +9.866355] kauditd_printk_skb: 44 callbacks suppressed
	[Nov18 00:28] kauditd_printk_skb: 2 callbacks suppressed
	[Nov18 00:29] NFSD: Unable to end grace period: -110
	[Nov18 00:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.985312] systemd-fstab-generator[4561]: Ignoring "noauto" for root device
	[Nov18 00:32] systemd-fstab-generator[4930]: Ignoring "noauto" for root device
	[ +14.568889] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.070219] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.777267] kauditd_printk_skb: 68 callbacks suppressed
	
	* 
	* ==> etcd [6b6cb829e1c19581f6cbff0d077cb8b42ea9a9d638ad7152952d67063fdaf037] <==
	* {"level":"info","ts":"2021-11-18T00:32:04.496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 switched to configuration voters=(10813892840880912868)"}
	{"level":"info","ts":"2021-11-18T00:32:04.497Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"48a667124f087be6","local-member-id":"9612aa3e8bd8b9e4","added-peer-id":"9612aa3e8bd8b9e4","added-peer-peer-urls":["https://192.168.50.33:2380"]}
	{"level":"info","ts":"2021-11-18T00:32:04.501Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-11-18T00:32:04.502Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"9612aa3e8bd8b9e4","initial-advertise-peer-urls":["https://192.168.50.33:2380"],"listen-peer-urls":["https://192.168.50.33:2380"],"advertise-client-urls":["https://192.168.50.33:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.33:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-11-18T00:32:04.502Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-11-18T00:32:04.503Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2021-11-18T00:32:04.503Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2021-11-18T00:32:05.466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 is starting a new election at term 1"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 received MsgPreVoteResp from 9612aa3e8bd8b9e4 at term 1"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became candidate at term 2"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 received MsgVoteResp from 9612aa3e8bd8b9e4 at term 2"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became leader at term 2"}
	{"level":"info","ts":"2021-11-18T00:32:05.467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9612aa3e8bd8b9e4 elected leader 9612aa3e8bd8b9e4 at term 2"}
	{"level":"info","ts":"2021-11-18T00:32:05.468Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:32:05.469Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"48a667124f087be6","local-member-id":"9612aa3e8bd8b9e4","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:32:05.470Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:32:05.470Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-11-18T00:32:05.470Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"9612aa3e8bd8b9e4","local-member-attributes":"{Name:no-preload-20211118002250-20973 ClientURLs:[https://192.168.50.33:2379]}","request-path":"/0/members/9612aa3e8bd8b9e4/attributes","cluster-id":"48a667124f087be6","publish-timeout":"7s"}
	{"level":"info","ts":"2021-11-18T00:32:05.470Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:32:05.470Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-11-18T00:32:05.472Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.50.33:2379"}
	{"level":"info","ts":"2021-11-18T00:32:05.472Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-11-18T00:32:05.472Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-11-18T00:32:05.472Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:32:58 up 6 min,  0 users,  load average: 2.20, 0.95, 0.40
	Linux no-preload-20211118002250-20973 4.19.202 #1 SMP Wed Oct 27 22:52:27 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.4"
	
	* 
	* ==> kube-apiserver [04e1fb4fb04357347aaf1617c80bccb5e1213c2b3828fc651088fe85abf3e096] <==
	* I1118 00:32:09.186213       1 cache.go:39] Caches are synced for autoregister controller
	I1118 00:32:09.186884       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1118 00:32:09.218348       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1118 00:32:09.227964       1 controller.go:611] quota admission added evaluator for: namespaces
	I1118 00:32:09.975790       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1118 00:32:09.975926       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1118 00:32:09.994924       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1118 00:32:10.017143       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1118 00:32:10.018235       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1118 00:32:11.097772       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1118 00:32:11.182132       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1118 00:32:11.389738       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.50.33]
	I1118 00:32:11.391527       1 controller.go:611] quota admission added evaluator for: endpoints
	I1118 00:32:11.403495       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1118 00:32:12.124506       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1118 00:32:12.993544       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1118 00:32:13.072263       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1118 00:32:18.337792       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1118 00:32:26.469740       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1118 00:32:26.562260       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1118 00:32:29.372657       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W1118 00:32:32.779264       1 handler_proxy.go:103] no RequestInfo found in the context
	E1118 00:32:32.779424       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1118 00:32:32.779449       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [48abaccdd682af5a967cce898242e2e381681e499e3e3c66100804636089cf59] <==
	* I1118 00:32:30.602014       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-nl8r7"
	I1118 00:32:31.494028       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I1118 00:32:31.567342       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.599077       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.632236       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-654cf69797 to 1"
	E1118 00:32:31.633235       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.633553       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1118 00:32:31.677139       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.705087       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.721158       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.720935       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1118 00:32:31.744813       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.746466       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.765855       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1118 00:32:31.767691       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.768633       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1118 00:32:31.771627       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.777176       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.777190       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1118 00:32:31.788812       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-654cf69797" failed with pods "kubernetes-dashboard-654cf69797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1118 00:32:31.790695       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-654cf69797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1118 00:32:31.827121       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-s7fs9"
	I1118 00:32:31.827762       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-654cf69797" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-654cf69797-44bzg"
	E1118 00:32:56.476474       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1118 00:32:56.914381       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f72bf19c2b8edf59655ee36fe67b385f3625fb787ba55d080b34829e0f20be76] <==
	* I1118 00:32:28.979145       1 node.go:172] Successfully retrieved node IP: 192.168.50.33
	I1118 00:32:28.979205       1 server_others.go:140] Detected node IP 192.168.50.33
	W1118 00:32:28.979231       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W1118 00:32:29.269110       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I1118 00:32:29.269148       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I1118 00:32:29.269162       1 server_others.go:212] Using iptables Proxier.
	I1118 00:32:29.270362       1 server.go:649] Version: v1.22.4-rc.0
	I1118 00:32:29.275388       1 config.go:224] Starting endpoint slice config controller
	I1118 00:32:29.275415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1118 00:32:29.276018       1 config.go:315] Starting service config controller
	I1118 00:32:29.276027       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1118 00:32:29.399465       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1118 00:32:29.476980       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [79c7ebb4c5f3e1bafa3cdd0e382fce72ef51cbef399cd3a83e8471eacf417567] <==
	* E1118 00:32:09.225848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1118 00:32:09.225930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1118 00:32:09.226023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1118 00:32:09.226099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:09.226178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:32:09.226246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1118 00:32:09.226317       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1118 00:32:09.226403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:10.070145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1118 00:32:10.115036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1118 00:32:10.220121       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1118 00:32:10.228834       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:10.244118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:10.357713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1118 00:32:10.393900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1118 00:32:10.467466       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1118 00:32:10.515712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1118 00:32:10.530408       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:10.618677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1118 00:32:10.667493       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1118 00:32:10.699212       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1118 00:32:10.735649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1118 00:32:12.218515       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1118 00:32:12.363106       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I1118 00:32:13.793720       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2021-11-18 00:26:59 UTC, ends at Thu 2021-11-18 00:32:58 UTC. --
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:35.303868    4937 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ec491e8-0e2e-4600-aaeb-7a5b8d2bbb3b-config-volume\") on node \"no-preload-20211118002250-20973\" DevicePath \"\""
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:35.303905    4937 reconciler.go:319] "Volume detached for volume \"kube-api-access-9n2w8\" (UniqueName: \"kubernetes.io/projected/2ec491e8-0e2e-4600-aaeb-7a5b8d2bbb3b-kube-api-access-9n2w8\") on node \"no-preload-20211118002250-20973\" DevicePath \"\""
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:35.801388    4937 scope.go:110] "RemoveContainer" containerID="ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255"
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:35.869999    4937 scope.go:110] "RemoveContainer" containerID="ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255"
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:35.872295    4937 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255\": not found" containerID="ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255"
	Nov 18 00:32:35 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:35.872933    4937 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255} err="failed to get container status \"ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad20f0afb10c30d14346fc1f084e0f0d5b10eda4863e77d0cb3dc8d6bad85255\": not found"
	Nov 18 00:32:36 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:36.410022    4937 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2ec491e8-0e2e-4600-aaeb-7a5b8d2bbb3b path="/var/lib/kubelet/pods/2ec491e8-0e2e-4600-aaeb-7a5b8d2bbb3b/volumes"
	Nov 18 00:32:39 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:39.239246    4937 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podc8310116-4453-4741-8a85-93cb19b62755\": RecentStats: unable to find data in memory cache]"
	Nov 18 00:32:39 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:39.818418    4937 scope.go:110] "RemoveContainer" containerID="7e6dd3659bef022a57627a6634cdde7437f6ba397371df0ffb88b35e0e63b08f"
	Nov 18 00:32:40 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:40.822344    4937 scope.go:110] "RemoveContainer" containerID="7e6dd3659bef022a57627a6634cdde7437f6ba397371df0ffb88b35e0e63b08f"
	Nov 18 00:32:40 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:40.822813    4937 scope.go:110] "RemoveContainer" containerID="9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc"
	Nov 18 00:32:40 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:40.823033    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-s7fs9_kubernetes-dashboard(23ce4f09-f4df-40fc-8075-4bf4faca397f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-s7fs9" podUID=23ce4f09-f4df-40fc-8075-4bf4faca397f
	Nov 18 00:32:41 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:41.838003    4937 scope.go:110] "RemoveContainer" containerID="9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc"
	Nov 18 00:32:41 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:41.838355    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-s7fs9_kubernetes-dashboard(23ce4f09-f4df-40fc-8075-4bf4faca397f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-s7fs9" podUID=23ce4f09-f4df-40fc-8075-4bf4faca397f
	Nov 18 00:32:42 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:42.840945    4937 scope.go:110] "RemoveContainer" containerID="9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc"
	Nov 18 00:32:42 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:42.841528    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-s7fs9_kubernetes-dashboard(23ce4f09-f4df-40fc-8075-4bf4faca397f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-s7fs9" podUID=23ce4f09-f4df-40fc-8075-4bf4faca397f
	Nov 18 00:32:44 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:44.436124    4937 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Nov 18 00:32:44 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:44.440847    4937 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Nov 18 00:32:44 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:44.442902    4937 kuberuntime_manager.go:898] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4485s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-nl8r7_kube-system(14b0e7f6-ff8e-40ee-9792-21d8ef1e24cd): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Nov 18 00:32:44 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:44.443263    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-nl8r7" podUID=14b0e7f6-ff8e-40ee-9792-21d8ef1e24cd
	Nov 18 00:32:54 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:54.402206    4937 scope.go:110] "RemoveContainer" containerID="9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc"
	Nov 18 00:32:55 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:55.890177    4937 scope.go:110] "RemoveContainer" containerID="9fcebaf1ff0fb7767c83616454b5731f9a00d4a7265492b3e68f9e317acca8fc"
	Nov 18 00:32:55 no-preload-20211118002250-20973 kubelet[4937]: I1118 00:32:55.890565    4937 scope.go:110] "RemoveContainer" containerID="97b14e302db79fa31173cadbf48e27f25bbc7e3de04a4c128b479336beb1328c"
	Nov 18 00:32:55 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:55.890877    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-s7fs9_kubernetes-dashboard(23ce4f09-f4df-40fc-8075-4bf4faca397f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-s7fs9" podUID=23ce4f09-f4df-40fc-8075-4bf4faca397f
	Nov 18 00:32:56 no-preload-20211118002250-20973 kubelet[4937]: E1118 00:32:56.404887    4937 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-nl8r7" podUID=14b0e7f6-ff8e-40ee-9792-21d8ef1e24cd
	
	* 
	* ==> kubernetes-dashboard [61e3a4068cd06bf34943509ec5f5dc252cadeaf44812b2baca7698a475981c16] <==
	* 2021/11/18 00:32:44 Using namespace: kubernetes-dashboard
	2021/11/18 00:32:44 Using in-cluster config to connect to apiserver
	2021/11/18 00:32:44 Using secret token for csrf signing
	2021/11/18 00:32:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/11/18 00:32:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/11/18 00:32:44 Successful initial request to the apiserver, version: v1.22.4-rc.0
	2021/11/18 00:32:44 Generating JWE encryption key
	2021/11/18 00:32:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/11/18 00:32:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/11/18 00:32:44 Initializing JWE encryption key from synchronized object
	2021/11/18 00:32:44 Creating in-cluster Sidecar client
	2021/11/18 00:32:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/11/18 00:32:44 Serving insecurely on HTTP port: 9090
	2021/11/18 00:32:44 Starting overwatch
	
	* 
	* ==> storage-provisioner [9f893ada9ebfedf64d8d3e885ff4f78d890a70ccc3519e515557db038f915c66] <==
	* I1118 00:32:33.699005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1118 00:32:33.816348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1118 00:32:33.816879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1118 00:32:33.857522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1118 00:32:33.863362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6e2a77d-9287-4b67-a7f3-2a02a83e0c60", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20211118002250-20973_a48b3672-7b71-4d15-95b7-4642f1a5995f became leader
	I1118 00:32:33.863628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20211118002250-20973_a48b3672-7b71-4d15-95b7-4642f1a5995f!
	I1118 00:32:33.964117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20211118002250-20973_a48b3672-7b71-4d15-95b7-4642f1a5995f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-7c784ccb57-nl8r7
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 describe pod metrics-server-7c784ccb57-nl8r7
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20211118002250-20973 describe pod metrics-server-7c784ccb57-nl8r7: exit status 1 (72.554376ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-nl8r7" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20211118002250-20973 describe pod metrics-server-7c784ccb57-nl8r7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.45s)


Test pass (250/285)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 10.86
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.22.3/json-events 6.75
11 TestDownloadOnly/v1.22.3/preload-exists 0
15 TestDownloadOnly/v1.22.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.4-rc.0/json-events 8.43
18 TestDownloadOnly/v1.22.4-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.4-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 93.91
28 TestAddons/Setup 152.83
30 TestAddons/parallel/Registry 15.38
31 TestAddons/parallel/Ingress 45.89
32 TestAddons/parallel/MetricsServer 6.07
33 TestAddons/parallel/HelmTiller 13.32
34 TestAddons/parallel/Olm 67.24
35 TestAddons/parallel/CSI 65.17
37 TestAddons/serial/GCPAuth 42.93
38 TestAddons/StoppedEnableDisable 93.65
39 TestCertOptions 82.12
40 TestCertExpiration 260.95
42 TestForceSystemdFlag 74.76
43 TestForceSystemdEnv 85.04
44 TestKVMDriverInstallOrUpdate 6.92
48 TestErrorSpam/setup 56.56
49 TestErrorSpam/start 0.43
50 TestErrorSpam/status 0.8
51 TestErrorSpam/pause 3.53
52 TestErrorSpam/unpause 1.61
53 TestErrorSpam/stop 5.25
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 84.55
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 27.13
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.19
65 TestFunctional/serial/CacheCmd/cache/add_local 1.83
67 TestFunctional/serial/CacheCmd/cache/list 0.05
71 TestFunctional/serial/MinikubeKubectlCmd 0.11
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 37.6
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.38
76 TestFunctional/serial/LogsFileCmd 1.46
78 TestFunctional/parallel/ConfigCmd 0.43
79 TestFunctional/parallel/DashboardCmd 6.26
80 TestFunctional/parallel/DryRun 0.31
81 TestFunctional/parallel/InternationalLanguage 0.2
82 TestFunctional/parallel/StatusCmd 0.89
85 TestFunctional/parallel/ServiceCmd 21.06
86 TestFunctional/parallel/AddonsCmd 0.16
87 TestFunctional/parallel/PersistentVolumeClaim 36.81
89 TestFunctional/parallel/SSHCmd 0.5
90 TestFunctional/parallel/CpCmd 0.55
91 TestFunctional/parallel/MySQL 24.46
92 TestFunctional/parallel/FileSync 0.23
93 TestFunctional/parallel/CertSync 1.68
97 TestFunctional/parallel/NodeLabels 0.08
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
104 TestFunctional/parallel/Version/short 0.07
105 TestFunctional/parallel/Version/components 0.75
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
107 TestFunctional/parallel/ProfileCmd/profile_list 0.39
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.32
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
114 TestFunctional/parallel/MountCmd/specific-port 1.57
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
121 TestFunctional/parallel/ImageCommands/ImageList 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.72
123 TestFunctional/parallel/ImageCommands/Setup 0.75
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.39
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.9
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.76
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.76
129 TestFunctional/delete_addon-resizer_images 0.1
130 TestFunctional/delete_my-image_image 0.04
131 TestFunctional/delete_minikube_cached_images 0.04
134 TestIngressAddonLegacy/StartLegacyK8sCluster 76.84
136 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.83
137 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.41
138 TestIngressAddonLegacy/serial/ValidateIngressAddons 60.22
141 TestJSONOutput/start/Command 79.3
142 TestJSONOutput/start/Audit 0
144 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/pause/Command 0.7
148 TestJSONOutput/pause/Audit 0
150 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/unpause/Command 0.68
154 TestJSONOutput/unpause/Audit 0
156 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/stop/Command 2.1
160 TestJSONOutput/stop/Audit 0
162 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
164 TestErrorJSONOutput 0.32
167 TestMainNoArgs 0.05
170 TestMountStart/serial/StartWithMountFirst 56.03
171 TestMountStart/serial/StartWithMountSecond 57.49
172 TestMountStart/serial/VerifyMountFirst 0.19
173 TestMountStart/serial/VerifyMountSecond 0.2
174 TestMountStart/serial/DeleteFirst 0.99
175 TestMountStart/serial/VerifyMountPostDelete 0.2
176 TestMountStart/serial/Stop 2.3
177 TestMountStart/serial/RestartStopped 87.31
178 TestMountStart/serial/VerifyMountPostStop 0.21
181 TestMultiNode/serial/FreshStart2Nodes 138.28
182 TestMultiNode/serial/DeployApp2Nodes 5.6
183 TestMultiNode/serial/PingHostFrom2Pods 0.99
184 TestMultiNode/serial/AddNode 53.38
185 TestMultiNode/serial/ProfileList 0.24
186 TestMultiNode/serial/CopyFile 1.85
187 TestMultiNode/serial/StopNode 2.96
188 TestMultiNode/serial/StartAfterStop 49.78
189 TestMultiNode/serial/RestartKeepsNodes 509.18
190 TestMultiNode/serial/DeleteNode 2.26
191 TestMultiNode/serial/StopMultiNode 184.38
192 TestMultiNode/serial/RestartMultiNode 216.27
193 TestMultiNode/serial/ValidateNameConflict 60
198 TestPreload 119.15
200 TestScheduledStopUnix 128.84
204 TestRunningBinaryUpgrade 235.03
206 TestKubernetesUpgrade 237.84
209 TestNoKubernetes/serial/Start 55.36
210 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
211 TestNoKubernetes/serial/ProfileList 1.45
212 TestNoKubernetes/serial/Stop 2.11
213 TestNoKubernetes/serial/StartNoArgs 28.38
222 TestPause/serial/Start 106.48
223 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
224 TestStoppedBinaryUpgrade/Setup 0.44
225 TestStoppedBinaryUpgrade/Upgrade 163.29
226 TestPause/serial/SecondStartNoReconfiguration 44.12
234 TestNetworkPlugins/group/false 0.45
235 TestPause/serial/Pause 0.95
239 TestPause/serial/VerifyStatus 0.31
240 TestPause/serial/Unpause 0.96
241 TestPause/serial/PauseAgain 5.54
242 TestPause/serial/DeletePaused 1.03
243 TestPause/serial/VerifyDeletedResources 0.36
244 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
246 TestStartStop/group/old-k8s-version/serial/FirstStart 146.91
248 TestStartStop/group/no-preload/serial/FirstStart 120.58
250 TestStartStop/group/embed-certs/serial/FirstStart 106.02
251 TestStartStop/group/no-preload/serial/DeployApp 10.64
252 TestStartStop/group/embed-certs/serial/DeployApp 8.55
253 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
254 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
255 TestStartStop/group/embed-certs/serial/Stop 92.7
256 TestStartStop/group/no-preload/serial/Stop 92.51
257 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
258 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
259 TestStartStop/group/old-k8s-version/serial/Stop 94.99
261 TestStartStop/group/default-k8s-different-port/serial/FirstStart 87.33
262 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
263 TestStartStop/group/embed-certs/serial/SecondStart 424.28
264 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
265 TestStartStop/group/no-preload/serial/SecondStart 361.71
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 2.71
267 TestStartStop/group/old-k8s-version/serial/SecondStart 528.76
268 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.64
269 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.21
270 TestStartStop/group/default-k8s-different-port/serial/Stop 92.52
271 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.17
272 TestStartStop/group/default-k8s-different-port/serial/SecondStart 389.77
273 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.02
274 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
276 TestStartStop/group/no-preload/serial/Pause 2.85
278 TestStartStop/group/newest-cni/serial/FirstStart 72.7
279 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.02
280 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
281 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
282 TestStartStop/group/embed-certs/serial/Pause 2.98
283 TestNetworkPlugins/group/auto/Start 84.36
284 TestStartStop/group/newest-cni/serial/DeployApp 0
285 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.29
286 TestStartStop/group/newest-cni/serial/Stop 5.18
287 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
288 TestStartStop/group/newest-cni/serial/SecondStart 84.51
289 TestNetworkPlugins/group/auto/KubeletFlags 0.23
290 TestNetworkPlugins/group/auto/NetCatPod 10.53
291 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.02
292 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.1
293 TestNetworkPlugins/group/auto/DNS 0.22
294 TestNetworkPlugins/group/auto/Localhost 0.19
295 TestNetworkPlugins/group/auto/HairPin 0.18
296 TestNetworkPlugins/group/kindnet/Start 102.18
297 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.27
298 TestStartStop/group/default-k8s-different-port/serial/Pause 2.85
299 TestNetworkPlugins/group/cilium/Start 126.97
300 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
301 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
302 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
303 TestStartStop/group/newest-cni/serial/Pause 2.3
304 TestNetworkPlugins/group/calico/Start 131.58
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 9.04
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
308 TestStartStop/group/old-k8s-version/serial/Pause 2.95
309 TestNetworkPlugins/group/custom-weave/Start 105.63
310 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
312 TestNetworkPlugins/group/kindnet/NetCatPod 12.79
313 TestNetworkPlugins/group/kindnet/DNS 0.34
314 TestNetworkPlugins/group/kindnet/Localhost 0.26
315 TestNetworkPlugins/group/kindnet/HairPin 0.26
316 TestNetworkPlugins/group/enable-default-cni/Start 92.27
317 TestNetworkPlugins/group/cilium/ControllerPod 5.05
318 TestNetworkPlugins/group/cilium/KubeletFlags 0.25
319 TestNetworkPlugins/group/cilium/NetCatPod 13.59
320 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.21
321 TestNetworkPlugins/group/custom-weave/NetCatPod 13.84
322 TestNetworkPlugins/group/calico/ControllerPod 6.24
323 TestNetworkPlugins/group/cilium/DNS 0.35
324 TestNetworkPlugins/group/cilium/Localhost 0.22
325 TestNetworkPlugins/group/cilium/HairPin 0.25
326 TestNetworkPlugins/group/calico/KubeletFlags 2.47
327 TestNetworkPlugins/group/calico/NetCatPod 12.82
328 TestNetworkPlugins/group/flannel/Start 83.81
329 TestNetworkPlugins/group/bridge/Start 102.14
330 TestNetworkPlugins/group/calico/DNS 0.37
331 TestNetworkPlugins/group/calico/Localhost 0.22
332 TestNetworkPlugins/group/calico/HairPin 0.2
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.56
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
338 TestNetworkPlugins/group/flannel/ControllerPod 8.02
339 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
340 TestNetworkPlugins/group/flannel/NetCatPod 10.54
341 TestNetworkPlugins/group/flannel/DNS 0.21
342 TestNetworkPlugins/group/flannel/Localhost 0.18
343 TestNetworkPlugins/group/flannel/HairPin 0.17
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
345 TestNetworkPlugins/group/bridge/NetCatPod 11.46
346 TestNetworkPlugins/group/bridge/DNS 0.24
347 TestNetworkPlugins/group/bridge/Localhost 0.19
348 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.14.0/json-events (10.86s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.862393533s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (10.86s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211117233428-20973
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211117233428-20973: exit status 85 (66.621319ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 23:34:28
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 23:34:28.139800   20985 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:34:28.139893   20985 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:28.139912   20985 out.go:310] Setting ErrFile to fd 2...
	I1117 23:34:28.139919   20985 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:28.140029   20985 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 23:34:28.140138   20985 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 23:34:28.140370   20985 out.go:304] Setting JSON to true
	I1117 23:34:28.175928   20985 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4630,"bootTime":1637187438,"procs":153,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1117 23:34:28.176029   20985 start.go:122] virtualization: kvm guest
	I1117 23:34:28.179168   20985 notify.go:174] Checking for updates...
	W1117 23:34:28.179189   20985 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 23:34:28.181728   20985 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:34:28.210994   20985 start.go:280] selected driver: kvm2
	I1117 23:34:28.211018   20985 start.go:775] validating driver "kvm2" against <nil>
	I1117 23:34:28.211633   20985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:28.211854   20985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 23:34:28.222412   20985 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.24.0
	I1117 23:34:28.222474   20985 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 23:34:28.222892   20985 start_flags.go:349] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I1117 23:34:28.222969   20985 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 23:34:28.223003   20985 cni.go:93] Creating CNI manager for ""
	I1117 23:34:28.223022   20985 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1117 23:34:28.223032   20985 start_flags.go:277] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 23:34:28.223040   20985 start_flags.go:282] config:
	{Name:download-only-20211117233428-20973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117233428-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:34:28.223170   20985 iso.go:123] acquiring lock: {Name:mk8cca007fc20acac1c2951039d04ddec7641ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:28.225510   20985 download.go:100] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/iso/minikube-v1.24.0.iso
	I1117 23:34:29.947118   20985 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I1117 23:34:29.977646   20985 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I1117 23:34:29.977683   20985 cache.go:57] Caching tarball of preloaded images
	I1117 23:34:29.977893   20985 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I1117 23:34:29.980446   20985 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 23:34:30.010155   20985 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:738c671fde6982928afe934ef4be3ce0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I1117 23:34:37.407923   20985 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 23:34:37.408013   20985 preload.go:255] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117233428-20973"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

TestDownloadOnly/v1.22.3/json-events (6.75s)

=== RUN   TestDownloadOnly/v1.22.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.752115082s)
--- PASS: TestDownloadOnly/v1.22.3/json-events (6.75s)

TestDownloadOnly/v1.22.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.3/preload-exists
--- PASS: TestDownloadOnly/v1.22.3/preload-exists (0.00s)

TestDownloadOnly/v1.22.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211117233428-20973
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211117233428-20973: exit status 85 (68.550354ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 23:34:39
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 23:34:39.068084   21021 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:34:39.068247   21021 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:39.068254   21021 out.go:310] Setting ErrFile to fd 2...
	I1117 23:34:39.068258   21021 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:39.068351   21021 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 23:34:39.068447   21021 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 23:34:39.068541   21021 out.go:304] Setting JSON to true
	I1117 23:34:39.103252   21021 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4641,"bootTime":1637187438,"procs":153,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1117 23:34:39.103332   21021 start.go:122] virtualization: kvm guest
	I1117 23:34:39.105794   21021 notify.go:174] Checking for updates...
	I1117 23:34:39.107948   21021 config.go:176] Loaded profile config "download-only-20211117233428-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W1117 23:34:39.108006   21021 start.go:683] api.Load failed for download-only-20211117233428-20973: filestore "download-only-20211117233428-20973": Docker machine "download-only-20211117233428-20973" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 23:34:39.108053   21021 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 23:34:39.108094   21021 start.go:683] api.Load failed for download-only-20211117233428-20973: filestore "download-only-20211117233428-20973": Docker machine "download-only-20211117233428-20973" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 23:34:39.136982   21021 start.go:280] selected driver: kvm2
	I1117 23:34:39.136996   21021 start.go:775] validating driver "kvm2" against &{Name:download-only-20211117233428-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117233428-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:34:39.137643   21021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:39.137796   21021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 23:34:39.148102   21021 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.24.0
	I1117 23:34:39.148701   21021 cni.go:93] Creating CNI manager for ""
	I1117 23:34:39.148717   21021 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1117 23:34:39.148727   21021 start_flags.go:282] config:
	{Name:download-only-20211117233428-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117233428-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:34:39.148821   21021 iso.go:123] acquiring lock: {Name:mk8cca007fc20acac1c2951039d04ddec7641ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:39.150943   21021 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime containerd
	I1117 23:34:39.171399   21021 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4
	I1117 23:34:39.171431   21021 cache.go:57] Caching tarball of preloaded images
	I1117 23:34:39.171578   21021 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime containerd
	I1117 23:34:39.173757   21021 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4 ...
	I1117 23:34:39.203794   21021 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:f268bd09384ee6265e34fce8eda0b1a6 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117233428-20973"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3/LogsDuration (0.07s)

TestDownloadOnly/v1.22.4-rc.0/json-events (8.43s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211117233428-20973 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.431363653s)
--- PASS: TestDownloadOnly/v1.22.4-rc.0/json-events (8.43s)

TestDownloadOnly/v1.22.4-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211117233428-20973
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211117233428-20973: exit status 85 (72.312677ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 23:34:45
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.17.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 23:34:45.892897   21058 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:34:45.893084   21058 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:45.893096   21058 out.go:310] Setting ErrFile to fd 2...
	I1117 23:34:45.893101   21058 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:34:45.893230   21058 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	W1117 23:34:45.893356   21058 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/config/config.json: no such file or directory
	I1117 23:34:45.893480   21058 out.go:304] Setting JSON to true
	I1117 23:34:45.928021   21058 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4648,"bootTime":1637187438,"procs":153,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1117 23:34:45.928104   21058 start.go:122] virtualization: kvm guest
	I1117 23:34:45.930550   21058 notify.go:174] Checking for updates...
	I1117 23:34:45.932883   21058 config.go:176] Loaded profile config "download-only-20211117233428-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	W1117 23:34:45.932935   21058 start.go:683] api.Load failed for download-only-20211117233428-20973: filestore "download-only-20211117233428-20973": Docker machine "download-only-20211117233428-20973" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 23:34:45.932991   21058 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 23:34:45.933039   21058 start.go:683] api.Load failed for download-only-20211117233428-20973: filestore "download-only-20211117233428-20973": Docker machine "download-only-20211117233428-20973" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 23:34:45.963676   21058 start.go:280] selected driver: kvm2
	I1117 23:34:45.963696   21058 start.go:775] validating driver "kvm2" against &{Name:download-only-20211117233428-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117233428-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:34:45.964327   21058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:45.964476   21058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 23:34:45.974791   21058 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.24.0
	I1117 23:34:45.975459   21058 cni.go:93] Creating CNI manager for ""
	I1117 23:34:45.975475   21058 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I1117 23:34:45.975484   21058 start_flags.go:282] config:
	{Name:download-only-20211117233428-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:download-only-20211117233428-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:34:45.975571   21058 iso.go:123] acquiring lock: {Name:mk8cca007fc20acac1c2951039d04ddec7641ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 23:34:45.977731   21058 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime containerd
	I1117 23:34:46.007571   21058 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-containerd-overlay2-amd64.tar.lz4
	I1117 23:34:46.007617   21058 cache.go:57] Caching tarball of preloaded images
	I1117 23:34:46.007759   21058 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime containerd
	I1117 23:34:46.010268   21058 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.4-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 23:34:46.039211   21058 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:fda906eb090f0ac021d6432aaa210271 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117233428-20973"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20211117233428-20973
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestOffline (93.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20211118001723-20973 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20211118001723-20973 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m32.824411534s)
helpers_test.go:175: Cleaning up "offline-containerd-20211118001723-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20211118001723-20973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20211118001723-20973: (1.085921855s)
--- PASS: TestOffline (93.91s)

TestAddons/Setup (152.83s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20211117233455-20973 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20211117233455-20973 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.831044413s)
--- PASS: TestAddons/Setup (152.83s)

TestAddons/parallel/Registry (15.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 28.160468ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-h9thg" [a7c3bf83-b0af-4539-9a9a-99fec46cc7e6] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01986303s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-8wnpt" [9c2c278b-f845-470a-a425-63772758564f] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.021012011s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20211117233455-20973 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20211117233455-20973 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.631367121s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 ip
2021/11/17 23:37:42 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.38s)

TestAddons/parallel/Ingress (45.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20211117233455-20973 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20211117233455-20973 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20211117233455-20973 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [bcd24377-217b-4d4a-a0e7-b761a37deede] Pending
helpers_test.go:342: "nginx" [bcd24377-217b-4d4a-a0e7-b761a37deede] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [bcd24377-217b-4d4a-a0e7-b761a37deede] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.014160346s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20211117233455-20973 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:248: (dbg) Done: nslookup hello-john.test 192.168.39.50: (1.635208554s)
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable ingress-dns --alsologtostderr -v=1: (1.336227438s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable ingress --alsologtostderr -v=1: (31.163362367s)
--- PASS: TestAddons/parallel/Ingress (45.89s)

TestAddons/parallel/MetricsServer (6.07s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 28.844405ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-77c99ccb96-m4mm8" [7770cffd-95ca-4c23-81d4-c726b37f9b56] Running
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.025402017s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20211117233455-20973 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.07s)

TestAddons/parallel/HelmTiller (13.32s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 30.715122ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-64b546c44b-ztdvc" [bb2f7333-ad22-4292-8d40-419d799af488] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.017480915s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20211117233455-20973 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20211117233455-20973 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.612325844s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.32s)

TestAddons/parallel/Olm (67.24s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20211117233455-20973 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s
addons_test.go:456: catalog-operator stabilized in 145.254719ms
addons_test.go:458: (dbg) Run:  kubectl --context addons-20211117233455-20973 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:461: olm-operator stabilized in 245.370801ms
addons_test.go:463: (dbg) Run:  kubectl --context addons-20211117233455-20973 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:466: packageserver stabilized in 385.152163ms
addons_test.go:468: (dbg) Run:  kubectl --context addons-20211117233455-20973 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:471: operatorhubio-catalog stabilized in 499.697815ms
addons_test.go:474: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/etcd.yaml
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117233455-20973 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117233455-20973 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117233455-20973 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117233455-20973 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211117233455-20973 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211117233455-20973 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (67.24s)

TestAddons/parallel/CSI (65.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 35.146078ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117233455-20973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117233455-20973 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [e5183525-b76d-4b9a-a69a-963fa1881ba9] Pending
helpers_test.go:342: "task-pv-pod" [e5183525-b76d-4b9a-a69a-963fa1881ba9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [e5183525-b76d-4b9a-a69a-963fa1881ba9] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.021794953s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211117233455-20973 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20211117233455-20973 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117233455-20973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20211117233455-20973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:567: (dbg) Done: kubectl --context addons-20211117233455-20973 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml: (1.124408177s)
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [8e36c552-581b-4c4a-a376-5f59dc959505] Pending
helpers_test.go:342: "task-pv-pod-restore" [8e36c552-581b-4c4a-a376-5f59dc959505] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [8e36c552-581b-4c4a-a376-5f59dc959505] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 17.016196957s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20211117233455-20973 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.357023081s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.17s)

TestAddons/serial/GCPAuth (42.93s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20211117233455-20973 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e5ab559d-9e25-49ab-a0d7-d241c67b2308] Pending
helpers_test.go:342: "busybox" [e5ab559d-9e25-49ab-a0d7-d241c67b2308] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e5ab559d-9e25-49ab-a0d7-d241c67b2308] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.012412137s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20211117233455-20973 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20211117233455-20973 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20211117233455-20973 addons disable gcp-auth --alsologtostderr -v=1: (7.111514157s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211117233455-20973 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20211117233455-20973 addons enable gcp-auth: (3.820277953s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20211117233455-20973 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7ff9c8c74f-jm7jh" [b1ffd8d7-34f4-435b-a8aa-e717621f7ba4] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7ff9c8c74f-jm7jh" [b1ffd8d7-34f4-435b-a8aa-e717621f7ba4] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.009825751s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20211117233455-20973 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-5956d58f9f-f7n9d" [3654131e-14e6-4be7-b6d7-085e01120c50] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-5956d58f9f-f7n9d" [3654131e-14e6-4be7-b6d7-085e01120c50] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.017566012s
--- PASS: TestAddons/serial/GCPAuth (42.93s)

TestAddons/StoppedEnableDisable (93.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20211117233455-20973
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20211117233455-20973: (1m33.480418665s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20211117233455-20973
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20211117233455-20973
--- PASS: TestAddons/StoppedEnableDisable (93.65s)

TestCertOptions (82.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20211118002144-20973 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20211118002144-20973 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m20.440335599s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20211118002144-20973 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20211118002144-20973 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20211118002144-20973 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20211118002144-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20211118002144-20973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20211118002144-20973: (1.12125186s)
--- PASS: TestCertOptions (82.12s)

TestCertExpiration (260.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211118002119-20973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211118002119-20973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m4.991028076s)
E1118 00:22:27.856292   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211118002119-20973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211118002119-20973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (14.965577371s)
helpers_test.go:175: Cleaning up "cert-expiration-20211118002119-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20211118002119-20973
--- PASS: TestCertExpiration (260.95s)

TestForceSystemdFlag (74.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20211118002128-20973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20211118002128-20973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m13.327220896s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20211118002128-20973 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20211118002128-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20211118002128-20973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20211118002128-20973: (1.21272512s)
--- PASS: TestForceSystemdFlag (74.76s)

TestForceSystemdEnv (85.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20211118001723-20973 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20211118001723-20973 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m23.785163762s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20211118001723-20973 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20211118001723-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20211118001723-20973
--- PASS: TestForceSystemdEnv (85.04s)

TestKVMDriverInstallOrUpdate (6.92s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.92s)

TestErrorSpam/setup (56.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20211117234058-20973 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211117234058-20973 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20211117234058-20973 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211117234058-20973 --driver=kvm2  --container-runtime=containerd: (56.564176945s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.3."
--- PASS: TestErrorSpam/setup (56.56s)

TestErrorSpam/start (0.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 start --dry-run
--- PASS: TestErrorSpam/start (0.43s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (3.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 pause
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 pause: (2.548212095s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 pause
--- PASS: TestErrorSpam/pause (3.53s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (5.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 stop: (5.10213376s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211117234058-20973 --log_dir /tmp/nospam-20211117234058-20973 stop
--- PASS: TestErrorSpam/stop (5.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/files/etc/test/nested/copy/20973/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1117 23:42:27.859424   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:27.865431   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:27.875699   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:27.895993   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:27.936319   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:28.016666   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:28.177355   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:28.497727   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:29.138648   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:30.419243   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:32.980185   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:38.101244   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:42:48.613305   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:43:09.094291   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
functional_test.go:2015: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211117234207-20973 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m24.550080452s)
--- PASS: TestFunctional/serial/StartWithProxy (84.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --alsologtostderr -v=8
E1117 23:43:50.055050   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
functional_test.go:600: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211117234207-20973 --alsologtostderr -v=8: (27.128913271s)
functional_test.go:604: soft start took 27.129484201s for "functional-20211117234207-20973" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.13s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.19s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211117234207-20973 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211117234207-20973 /tmp/functional-20211117234207-209732951385836
functional_test.go:1026: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add minikube-local-cache-test:functional-20211117234207-20973
functional_test.go:1026: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 cache add minikube-local-cache-test:functional-20211117234207-20973: (1.537898278s)
functional_test.go:1031: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cache delete minikube-local-cache-test:functional-20211117234207-20973
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211117234207-20973
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 kubectl -- --context functional-20211117234207-20973 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out/kubectl --context functional-20211117234207-20973 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:698: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211117234207-20973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.600040079s)
functional_test.go:702: restart took 37.600148551s for "functional-20211117234207-20973" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.60s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211117234207-20973 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:767: etcd phase: Running
functional_test.go:777: etcd status: Ready
functional_test.go:767: kube-apiserver phase: Running
functional_test.go:777: kube-apiserver status: Ready
functional_test.go:767: kube-controller-manager phase: Running
functional_test.go:777: kube-controller-manager status: Ready
functional_test.go:767: kube-scheduler phase: Running
functional_test.go:777: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 logs
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 logs: (1.382190963s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 logs --file /tmp/functional-20211117234207-209733852987025/logs.txt
functional_test.go:1190: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 logs --file /tmp/functional-20211117234207-209733852987025/logs.txt: (1.458035761s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 config get cpus: exit status 14 (78.118322ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 config get cpus: exit status 14 (71.787492ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (6.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211117234207-20973 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:852: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211117234207-20973 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 25693: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.26s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:912: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211117234207-20973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (153.98761ms)

-- stdout --
	* [functional-20211117234207-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1117 23:45:08.746143   25547 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:45:08.746299   25547 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:45:08.746307   25547 out.go:310] Setting ErrFile to fd 2...
	I1117 23:45:08.746311   25547 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:45:08.746413   25547 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 23:45:08.746627   25547 out.go:304] Setting JSON to false
	I1117 23:45:08.780834   25547 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5271,"bootTime":1637187438,"procs":154,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1117 23:45:08.780906   25547 start.go:122] virtualization: kvm guest
	I1117 23:45:08.783623   25547 out.go:176] * [functional-20211117234207-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	I1117 23:45:08.785231   25547 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 23:45:08.786880   25547 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 23:45:08.788411   25547 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 23:45:08.790002   25547 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:45:08.790386   25547 config.go:176] Loaded profile config "functional-20211117234207-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1117 23:45:08.790768   25547 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:45:08.790820   25547 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:45:08.801585   25547 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43835
	I1117 23:45:08.802015   25547 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:45:08.802628   25547 main.go:130] libmachine: Using API Version  1
	I1117 23:45:08.802649   25547 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:45:08.803036   25547 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:45:08.803258   25547 main.go:130] libmachine: (functional-20211117234207-20973) Calling .DriverName
	I1117 23:45:08.803472   25547 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:45:08.803815   25547 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:45:08.803852   25547 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:45:08.813884   25547 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1117 23:45:08.814289   25547 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:45:08.814660   25547 main.go:130] libmachine: Using API Version  1
	I1117 23:45:08.814678   25547 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:45:08.814983   25547 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:45:08.815164   25547 main.go:130] libmachine: (functional-20211117234207-20973) Calling .DriverName
	I1117 23:45:08.842485   25547 out.go:176] * Using the kvm2 driver based on existing profile
	I1117 23:45:08.842510   25547 start.go:280] selected driver: kvm2
	I1117 23:45:08.842515   25547 start.go:775] validating driver "kvm2" against &{Name:functional-20211117234207-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117234207-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:45:08.842665   25547 start.go:786] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:45:08.845274   25547 out.go:176] 
	W1117 23:45:08.845383   25547 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 23:45:08.846984   25547 out.go:176] 

** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
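Both the DryRun and InternationalLanguage tests depend on minikube rejecting the deliberately tiny 250MB request before doing any real work. A minimal sketch of that pre-flight check, using only the two values visible in the error above; the real validation lives in minikube's Go code, so the names here are illustrative:

```python
# Hypothetical sketch of minikube's requested-memory validation; the 1800MB
# floor and the message wording are taken from the log line above.
MIN_USABLE_MB = 1800

def check_requested_memory(requested_mb):
    """Return an RSRC_INSUFFICIENT_REQ_MEMORY reason, or None if acceptable."""
    if requested_mb < MIN_USABLE_MB:
        return ("RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation "
                "%dMiB is less than the usable minimum of %dMB"
                % (requested_mb, MIN_USABLE_MB))
    return None

print(check_requested_memory(250))   # the reason string seen in the stderr above
print(check_requested_memory(4000))  # None: the profile's real 4000MB passes
```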

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211117234207-20973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211117234207-20973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (196.64536ms)

-- stdout --
	* [functional-20211117234207-20973] minikube v1.24.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_LOCATION=12739
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1117 23:44:47.534332   24801 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:44:47.534464   24801 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:44:47.534474   24801 out.go:310] Setting ErrFile to fd 2...
	I1117 23:44:47.534481   24801 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:44:47.534679   24801 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 23:44:47.534938   24801 out.go:304] Setting JSON to false
	I1117 23:44:47.578967   24801 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5250,"bootTime":1637187438,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1117 23:44:47.579085   24801 start.go:122] virtualization: kvm guest
	I1117 23:44:47.581219   24801 out.go:176] * [functional-20211117234207-20973] minikube v1.24.0 sur Debian 9.13 (kvm/amd64)
	I1117 23:44:47.582935   24801 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1117 23:44:47.584538   24801 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 23:44:47.586346   24801 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1117 23:44:47.588061   24801 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 23:44:47.588567   24801 config.go:176] Loaded profile config "functional-20211117234207-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1117 23:44:47.589113   24801 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:44:47.589176   24801 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:44:47.602907   24801 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41875
	I1117 23:44:47.606377   24801 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:44:47.606932   24801 main.go:130] libmachine: Using API Version  1
	I1117 23:44:47.606958   24801 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:44:47.607456   24801 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:44:47.607698   24801 main.go:130] libmachine: (functional-20211117234207-20973) Calling .DriverName
	I1117 23:44:47.607931   24801 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 23:44:47.608416   24801 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:44:47.608466   24801 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:44:47.621669   24801 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38571
	I1117 23:44:47.622037   24801 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:44:47.622482   24801 main.go:130] libmachine: Using API Version  1
	I1117 23:44:47.622501   24801 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:44:47.622821   24801 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:44:47.623020   24801 main.go:130] libmachine: (functional-20211117234207-20973) Calling .DriverName
	I1117 23:44:47.655521   24801 out.go:176] * Utilisation du pilote kvm2 basé sur le profil existant
	I1117 23:44:47.655550   24801 start.go:280] selected driver: kvm2
	I1117 23:44:47.655557   24801 start.go:775] validating driver "kvm2" against &{Name:functional-20211117234207-20973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.24.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117234
207-20973 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-al
iases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host}
	I1117 23:44:47.655695   24801 start.go:786] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 23:44:47.658463   24801 out.go:176] 
	W1117 23:44:47.658586   24801 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 23:44:47.660137   24801 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
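The test passes because the same RSRC_INSUFFICIENT_REQ_MEMORY failure fires, only with its message resolved through a locale catalog (minikube ships per-language translation files). The in-memory catalog below is a stand-in for that mechanism, with the one French string copied verbatim from the stderr above:

```python
# Illustrative locale lookup; the catalog structure is an assumption, the
# French string is copied from the log.
CATALOG = {
    "fr": {
        "RSRC_INSUFFICIENT_REQ_MEMORY":
            "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : "
            "L'allocation de mémoire demandée 250 Mio est inférieure "
            "au minimum utilisable de 1800 Mo",
    },
}

def localized(locale, code, english):
    """Fall back to the English text when no translation exists."""
    return CATALOG.get(locale, {}).get(code, english)

print(localized("fr", "RSRC_INSUFFICIENT_REQ_MEMORY", "english fallback"))
print(localized("en", "RSRC_INSUFFICIENT_REQ_MEMORY", "english fallback"))
```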

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 status
functional_test.go:802: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:814: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
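The second status invocation above renders a custom Go template (note that the `kublet:` key is spelled exactly as in the test command). Parsing its one-line output back into fields is straightforward; a sketch, with the sample line below assumed from a healthy cluster rather than taken from this log:

```python
def parse_status_line(line):
    """Split 'key:Value,key:Value,...' as produced by the -f template above."""
    return dict(field.split(":", 1) for field in line.split(","))

fields = parse_status_line(
    "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured")
print(fields["apiserver"])  # Running
```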

TestFunctional/parallel/ServiceCmd (21.06s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211117234207-20973 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1378: (dbg) Run:  kubectl --context functional-20211117234207-20973 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-6cbfcd7cbc-sjd9x" [25ca7972-e2dd-4ebb-a96b-a37398c9e4b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-6cbfcd7cbc-sjd9x" [25ca7972-e2dd-4ebb-a96b-a37398c9e4b6] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1383: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 19.03935931s
functional_test.go:1388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1401: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 service --namespace=default --https --url hello-node
functional_test.go:1410: found endpoint: https://192.168.39.239:30156
functional_test.go:1421: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1430: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1436: found endpoint for hello-node: http://192.168.39.239:30156
functional_test.go:1447: Attempting to fetch http://192.168.39.239:30156 ...
functional_test.go:1467: http://192.168.39.239:30156: success! body:

Hostname: hello-node-6cbfcd7cbc-sjd9x

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.239:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.239:30156
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (21.06s)
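How the endpoints found above are formed: `kubectl expose --type=NodePort` allocated port 30156, and minikube joins it to the node IP 192.168.39.239 with the requested scheme. A trivial sketch of that construction (values copied from the log):

```python
def service_url(node_ip, node_port, https=False):
    """Build the URL minikube prints for a NodePort service."""
    scheme = "https" if https else "http"
    return "%s://%s:%d" % (scheme, node_ip, node_port)

print(service_url("192.168.39.239", 30156, https=True))  # the --https endpoint
print(service_url("192.168.39.239", 30156))              # the plain endpoint
```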

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 addons list
functional_test.go:1494: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (36.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [98786ece-b55a-4722-8eaa-8d0a37192df2] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.031848663s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20211117234207-20973 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20211117234207-20973 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211117234207-20973 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211117234207-20973 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211117234207-20973 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5b809f6d-3540-4626-b260-519c15a5b66a] Pending
helpers_test.go:342: "sp-pod" [5b809f6d-3540-4626-b260-519c15a5b66a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5b809f6d-3540-4626-b260-519c15a5b66a] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.011099507s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20211117234207-20973 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20211117234207-20973 delete -f testdata/storage-provisioner/pod.yaml: (1.04214065s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211117234207-20973 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [7fbc19a8-b8b0-4e96-8695-1534a0844423] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [7fbc19a8-b8b0-4e96-8695-1534a0844423] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [7fbc19a8-b8b0-4e96-8695-1534a0844423] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.019348672s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.81s)
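The core assertion of this test: a file written before `sp-pod` is deleted is still there after a new pod mounts the same claim. A sketch of that flow, with a plain dict standing in for the persistent volume (class and file names are illustrative, not minikube's):

```python
class Volume:
    """Stands in for the PV bound to pvc/myclaim; outlives any pod."""
    def __init__(self):
        self.files = {}

class Pod:
    """Stands in for sp-pod; mounts the volume's files at /tmp/mount."""
    def __init__(self, volume):
        self.mount = volume.files  # shared with the volume, not copied

    def touch(self, name):
        self.mount[name] = ""

    def ls(self):
        return sorted(self.mount)

vol = Volume()
Pod(vol).touch("foo")   # kubectl exec sp-pod -- touch /tmp/mount/foo
new_pod = Pod(vol)      # delete -f pod.yaml; apply -f pod.yaml: fresh pod, same claim
print(new_pod.ls())     # ['foo'] — the write survived the pod recreation
```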

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (0.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.55s)

TestFunctional/parallel/MySQL (24.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211117234207-20973 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-9bbbc5bbb-dff46" [e5e836f5-8cc0-4dd8-b77f-c75e01226bf0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-9bbbc5bbb-dff46" [e5e836f5-8cc0-4dd8-b77f-c75e01226bf0] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1577: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.05378388s
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;": exit status 1 (315.712338ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;": exit status 1 (227.513885ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;"
functional_test.go:1585: (dbg) Non-zero exit: kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;": exit status 1 (319.782008ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1585: (dbg) Run:  kubectl --context functional-20211117234207-20973 exec mysql-9bbbc5bbb-dff46 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.46s)
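The three failed `mysql -ppassword` attempts above are expected: mysqld takes a while to initialize after the pod starts, so the test simply re-runs the query until it succeeds. A generic sketch of that retry pattern (the helper name is illustrative; the real test checks the command's exit status instead of exceptions):

```python
import time

def retry(fn, attempts=5, delay=0.0):
    """Call fn until it stops raising; re-raise the last error when exhausted."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Simulated flaky query: fails twice (server still starting), then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ERROR 2002 (HY000): Can't connect to local MySQL server")
    return "show databases; -> ok"

print(retry(flaky_query))  # succeeds on the third attempt
```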

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/20973/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /etc/test/nested/copy/20973/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1714: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
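FileSync works by copying everything under `$MINIKUBE_HOME/.minikube/files/` into the VM at the same relative path, so the hosts file checked above came from `files/etc/test/nested/copy/20973/hosts` on the Jenkins host. A sketch of that path mapping; the sync-root value below is illustrative, only the relative-path rule matters:

```python
from pathlib import PurePosixPath

def vm_destination(files_root, host_file):
    """Map a file under the sync root to its in-VM destination path."""
    rel = PurePosixPath(host_file).relative_to(files_root)
    return str(PurePosixPath("/") / rel)

print(vm_destination(
    "/home/jenkins/.minikube/files",
    "/home/jenkins/.minikube/files/etc/test/nested/copy/20973/hosts"))
# /etc/test/nested/copy/20973/hosts
```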

TestFunctional/parallel/CertSync (1.68s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/20973.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /etc/ssl/certs/20973.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/20973.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /usr/share/ca-certificates/20973.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1777: Checking for existence of /etc/ssl/certs/209732.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /etc/ssl/certs/209732.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/209732.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /usr/share/ca-certificates/209732.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
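CertSync verifies each synced certificate in three places: the named copy under /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL-style hash alias (the 51391683.0 and 3ec20f2e.0 names above). The hash values in this sketch are copied from the log rather than recomputed, and the helper is purely illustrative:

```python
def expected_cert_paths(name, subject_hash):
    """The three locations the test ssh-cats for one synced certificate."""
    return [
        "/etc/ssl/certs/%s.pem" % name,
        "/usr/share/ca-certificates/%s.pem" % name,
        "/etc/ssl/certs/%s.0" % subject_hash,
    ]

print(expected_cert_paths("20973", "51391683"))
print(expected_cert_paths("209732", "3ec20f2e"))
```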

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211117234207-20973 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
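The go-template in the command above ranges over the node's label map and space-joins the keys (Go's text/template visits map keys in sorted order). The same transform in Python, over two standard labels as assumed example data:

```python
def label_keys(labels):
    """Space-join label keys, sorted to mirror text/template's map ordering."""
    return " ".join(sorted(labels))

print(label_keys({"kubernetes.io/os": "linux", "kubernetes.io/arch": "amd64"}))
# kubernetes.io/arch kubernetes.io/os
```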

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo systemctl is-active docker"
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo systemctl is-active docker": exit status 1 (267.567898ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1805: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo systemctl is-active crio"
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo systemctl is-active crio": exit status 1 (277.713797ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
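A note on the non-zero exits above: `systemctl is-active` prints the unit state on stdout and mirrors it in the exit code (0 when active, a non-zero code such as 3 when inactive), and the ssh wrapper propagates that code, so the test can branch on exit status alone. A minimal sketch of the pattern, using a hypothetical stand-in function since no systemd unit or cluster is assumed here:

```shell
# fake_is_active is a hypothetical stand-in for
#   ssh "sudo systemctl is-active <unit>"
# It prints the state and exits 0 only for the active runtime,
# mirroring systemctl's exit-code convention.
fake_is_active() {
  case "$1" in
    containerd) echo "active";   return 0 ;;
    *)          echo "inactive"; return 3 ;;
  esac
}

for runtime in docker crio containerd; do
  if fake_is_active "$runtime" >/dev/null; then
    echo "$runtime: active"
  else
    # $? still holds the failing status of the condition here
    echo "$runtime: disabled (exit $?)"
  fi
done
```

This is the shape of the check above: docker and crio must report inactive while containerd is the active runtime.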

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 version -o=json --components
2021/11/17 23:45:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1218: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1258: Took "316.17911ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1272: Took "78.5916ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20211117234207-20973 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20211117234207-20973 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [07e07c36-558f-4d93-8a8b-3b51f9626899] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [07e07c36-558f-4d93-8a8b-3b51f9626899] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.039031453s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1309: Took "444.899924ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1322: Took "60.272401ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/specific-port (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:226: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20211117234207-20973 /tmp/mounttest2768735396:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.284099ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:270: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh -- ls -la /mount-9p
functional_test_mount_test.go:274: guest mount directory contents
total 0
functional_test_mount_test.go:276: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211117234207-20973 /tmp/mounttest2768735396:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:277: reading mount text
functional_test_mount_test.go:291: done reading mount text
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh "sudo umount -f /mount-9p": exit status 1 (208.639051ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:245: "out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:247: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211117234207-20973 /tmp/mounttest2768735396:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20211117234207-20973 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.103.193.170 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20211117234207-20973 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageList (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls
functional_test.go:246: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls:
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20211117234207-20973
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageList (0.23s)
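The listing above is what the subsequent assertions consume: each expected image must appear as a `repository:tag` line. A small sketch of that membership check, with a subset of the listing copied from the log:

```shell
# Subset of the `image ls` output above, captured as plain text.
images='k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/etcd:3.5.0-0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine'

# grep -q mirrors an "is this image cached?" assertion without
# printing the matching line.
if printf '%s\n' "$images" | grep -q '^k8s\.gcr\.io/pause:'; then
  echo "pause image present"
fi
```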

TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh pgrep buildkitd
functional_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211117234207-20973 ssh pgrep buildkitd: exit status 1 (207.438797ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image build -t localhost/my-image:functional-20211117234207-20973 testdata/build
functional_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 image build -t localhost/my-image:functional-20211117234207-20973 testdata/build: (3.304896946s)
functional_test.go:279: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20211117234207-20973 image build -t localhost/my-image:functional-20211117234207-20973 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 0.8s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.1s

#4 [1/3] FROM docker.io/library/busybox@sha256:e7157b6d7ebbe2cce5eaa8cfe8aa4fa82d173999b9f90a9ec42e57323546c353
#4 resolve docker.io/library/busybox@sha256:e7157b6d7ebbe2cce5eaa8cfe8aa4fa82d173999b9f90a9ec42e57323546c353 0.1s done
#4 sha256:e685c5c858e36338a47c627763b50dfe6035b547f1f75f0d39753db71e319016 772.79kB / 772.79kB 0.2s
#4 DONE 0.2s

#4 [1/3] FROM docker.io/library/busybox@sha256:e7157b6d7ebbe2cce5eaa8cfe8aa4fa82d173999b9f90a9ec42e57323546c353
#4 extracting sha256:e685c5c858e36338a47c627763b50dfe6035b547f1f75f0d39753db71e319016 0.1s done
#4 DONE 0.3s

#5 [2/3] RUN true
#5 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.4s done
#8 exporting manifest sha256:e7e7a3f7e62d63aa726b125e837ad14723ddb61fade3f62044c3bdb34f401ab0 0.0s done
#8 exporting config sha256:2694e6cb39cc615c4a639b16dddd53496dff702ab7500db8c60a379a4d9468f3
#8 exporting config sha256:2694e6cb39cc615c4a639b16dddd53496dff702ab7500db8c60a379a4d9468f3 0.0s done
#8 naming to localhost/my-image:functional-20211117234207-20973 done
#8 DONE 0.5s
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)
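The BuildKit stages above imply a three-step Dockerfile (stage #4 `FROM busybox`, #5 `RUN true`, #7 `ADD content.txt /`). The sketch below reconstructs such a build context; the exact contents of `testdata/build` are not shown in the log, so the Dockerfile body is inferred from the stage names and the `content.txt` payload is a placeholder:

```shell
# Recreate a build context matching the BuildKit stages in the log.
# Both files are inferred/placeholder, not the repo's actual testdata.
mkdir -p /tmp/build-sketch
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM busybox
RUN true
ADD content.txt /
EOF
echo "placeholder" > /tmp/build-sketch/content.txt

# The test then builds the context via, roughly:
#   minikube image build -t localhost/my-image:<profile> /tmp/build-sketch
cat /tmp/build-sketch/Dockerfile
```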

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E1117 23:45:11.975963   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117234207-20973: (4.18536292s)
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image save gcr.io/google-containers/addon-resizer:functional-20211117234207-20973 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 image save gcr.io/google-containers/addon-resizer:functional-20211117234207-20973 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.898493542s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.90s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image rm gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.545332926s)
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211117234207-20973 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-20211117234207-20973 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117234207-20973: (1.682949282s)
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.76s)
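Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a save/remove/load round-trip through a tar archive. Since the real commands need a live cluster, the sketch below imitates only the flow, with a plain file and `tar` standing in for the image and for `minikube image save` / `minikube image load`:

```shell
# Round-trip sketch: save -> remove -> load, with tar as a stand-in
# for the image archive commands (no cluster assumed).
workdir=$(mktemp -d)
echo "image-bits" > "$workdir/addon-resizer"                           # stand-in image
tar -C "$workdir" -cf "$workdir/addon-resizer-save.tar" addon-resizer  # "image save"
rm "$workdir/addon-resizer"                                            # "image rm"
tar -C "$workdir" -xf "$workdir/addon-resizer-save.tar"                # "image load"
cat "$workdir/addon-resizer"                                           # content survives the round-trip
```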

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211117234207-20973
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211117234207-20973
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211117234207-20973
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.84s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20211117234527-20973 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20211117234527-20973 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m16.835215227s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (76.84s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.83s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons enable ingress --alsologtostderr -v=5: (11.833150297s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.83s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.41s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.41s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (60.22s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117234527-20973 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211117234527-20973 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.390396872s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117234527-20973 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117234527-20973 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [b425ce2f-74d5-4601-b680-14e7edde3b0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [b425ce2f-74d5-4601-b680-14e7edde3b0a] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.013487801s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20211117234527-20973 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.39.148
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons disable ingress-dns --alsologtostderr -v=1
E1117 23:47:27.855858   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons disable ingress-dns --alsologtostderr -v=1: (1.848317498s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons disable ingress --alsologtostderr -v=1
E1117 23:47:55.816223   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211117234527-20973 addons disable ingress --alsologtostderr -v=1: (28.783561804s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (60.22s)

TestJSONOutput/start/Command (79.3s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20211117234758-20973 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20211117234758-20973 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m19.301529701s)
--- PASS: TestJSONOutput/start/Command (79.30s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20211117234758-20973 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20211117234758-20973 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20211117234758-20973 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20211117234758-20973 --output=json --user=testUser: (2.098147723s)
--- PASS: TestJSONOutput/stop/Command (2.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20211117234921-20973 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20211117234921-20973 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.886943ms)
-- stdout --
	{"specversion":"1.0","id":"de071328-7dab-4c27-a1a6-38e6d8b196a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211117234921-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4db823d-58cd-4e69-bbf8-5d4d3f05c9ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig"}}
	{"specversion":"1.0","id":"8644f862-fdea-4ecc-b00d-e098c43a4422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83c0a91a-dd8b-49ac-8559-787b2fa907bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube"}}
	{"specversion":"1.0","id":"9bee6e44-9a2a-4d49-bfdb-66dfac1ab995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"5ab776f9-7de7-4bfb-b0d0-9e8f9c67270a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211117234921-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20211117234921-20973
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (56.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20211117234922-20973 --memory=2048 --mount --driver=kvm2  --container-runtime=containerd
E1117 23:49:47.651213   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.656531   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.666760   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.687040   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.727332   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.807650   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:47.968145   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:48.288892   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:48.929915   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:50.210390   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:52.772165   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:49:57.892957   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:50:08.133445   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
mount_start_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20211117234922-20973 --memory=2048 --mount --driver=kvm2  --container-runtime=containerd: (56.0344991s)
--- PASS: TestMountStart/serial/StartWithMountFirst (56.03s)

TestMountStart/serial/StartWithMountSecond (57.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211117234922-20973 --memory=2048 --mount --driver=kvm2  --container-runtime=containerd
E1117 23:50:28.613948   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:51:09.575405   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
mount_start_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211117234922-20973 --memory=2048 --mount --driver=kvm2  --container-runtime=containerd: (57.492662141s)
--- PASS: TestMountStart/serial/StartWithMountSecond (57.49s)

TestMountStart/serial/VerifyMountFirst (0.19s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20211117234922-20973 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.19s)

TestMountStart/serial/VerifyMountSecond (0.2s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211117234922-20973 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.20s)

TestMountStart/serial/DeleteFirst (0.99s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20211117234922-20973 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.99s)

TestMountStart/serial/VerifyMountPostDelete (0.2s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211117234922-20973 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.20s)

TestMountStart/serial/Stop (2.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20211117234922-20973
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20211117234922-20973: (2.298240949s)
--- PASS: TestMountStart/serial/Stop (2.30s)

TestMountStart/serial/RestartStopped (87.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211117234922-20973
E1117 23:51:57.022949   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.028219   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.038428   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.058665   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.098892   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.179194   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.339604   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:57.660280   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:58.300509   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:51:59.581549   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:52:02.142322   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:52:07.262574   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:52:17.502917   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:52:27.855772   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:52:31.498884   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1117 23:52:37.983887   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
mount_start_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211117234922-20973: (1m27.311097912s)
--- PASS: TestMountStart/serial/RestartStopped (87.31s)

TestMountStart/serial/VerifyMountPostStop (0.21s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211117234922-20973 ssh ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.21s)

TestMultiNode/serial/FreshStart2Nodes (138.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1117 23:53:18.944538   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:54:40.865582   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:54:47.651252   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
multinode_test.go:82: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m17.856059845s)
multinode_test.go:88: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.28s)

TestMultiNode/serial/DeployApp2Nodes (5.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:468: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- rollout status deployment/busybox: (3.483299808s)
multinode_test.go:474: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-d8r6q -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-hv67q -- nslookup kubernetes.io
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-d8r6q -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-hv67q -- nslookup kubernetes.default
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-d8r6q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-hv67q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.60s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-d8r6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-d8r6q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:530: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-hv67q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211117235248-20973 -- exec busybox-84b6686758-hv67q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

TestMultiNode/serial/AddNode (53.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211117235248-20973 -v 3 --alsologtostderr
E1117 23:55:15.339592   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
multinode_test.go:107: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20211117235248-20973 -v 3 --alsologtostderr: (52.779466182s)
multinode_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.38s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (1.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --output json --alsologtostderr
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 cp testdata/cp-test.txt multinode-20211117235248-20973-m02:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 ssh -n multinode-20211117235248-20973-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 cp testdata/cp-test.txt multinode-20211117235248-20973-m03:/home/docker/cp-test.txt
helpers_test.go:548: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 ssh -n multinode-20211117235248-20973-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.85s)

TestMultiNode/serial/StopNode (2.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211117235248-20973 node stop m03: (2.099015628s)
multinode_test.go:198: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211117235248-20973 status: exit status 7 (426.362325ms)

-- stdout --
	multinode-20211117235248-20973
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211117235248-20973-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211117235248-20973-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr: exit status 7 (429.645313ms)

-- stdout --
	multinode-20211117235248-20973
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211117235248-20973-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211117235248-20973-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1117 23:56:11.333975   30206 out.go:297] Setting OutFile to fd 1 ...
	I1117 23:56:11.334063   30206 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:56:11.334072   30206 out.go:310] Setting ErrFile to fd 2...
	I1117 23:56:11.334076   30206 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 23:56:11.334171   30206 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1117 23:56:11.334316   30206 out.go:304] Setting JSON to false
	I1117 23:56:11.334329   30206 mustload.go:65] Loading cluster: multinode-20211117235248-20973
	I1117 23:56:11.334592   30206 config.go:176] Loaded profile config "multinode-20211117235248-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1117 23:56:11.334606   30206 status.go:253] checking status of multinode-20211117235248-20973 ...
	I1117 23:56:11.334926   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.334965   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.346805   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37101
	I1117 23:56:11.347335   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.347843   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.347864   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.348250   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.348485   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetState
	I1117 23:56:11.351664   30206 status.go:328] multinode-20211117235248-20973 host status = "Running" (err=<nil>)
	I1117 23:56:11.351681   30206 host.go:66] Checking if "multinode-20211117235248-20973" exists ...
	I1117 23:56:11.352019   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.352058   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.363071   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42307
	I1117 23:56:11.363475   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.363891   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.363911   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.364243   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.364445   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetIP
	I1117 23:56:11.370070   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | domain multinode-20211117235248-20973 has defined MAC address 52:54:00:52:49:98 in network mk-multinode-20211117235248-20973
	I1117 23:56:11.370453   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:49:98", ip: ""} in network mk-multinode-20211117235248-20973: {Iface:virbr1 ExpiryTime:2021-11-18 00:53:01 +0000 UTC Type:0 Mac:52:54:00:52:49:98 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-20211117235248-20973 Clientid:01:52:54:00:52:49:98}
	I1117 23:56:11.370485   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | domain multinode-20211117235248-20973 has defined IP address 192.168.39.105 and MAC address 52:54:00:52:49:98 in network mk-multinode-20211117235248-20973
	I1117 23:56:11.370576   30206 host.go:66] Checking if "multinode-20211117235248-20973" exists ...
	I1117 23:56:11.370859   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.370889   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.380815   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41049
	I1117 23:56:11.381182   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.381527   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.381549   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.381868   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.382050   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .DriverName
	I1117 23:56:11.382233   30206 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:56:11.382276   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetSSHHostname
	I1117 23:56:11.386904   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | domain multinode-20211117235248-20973 has defined MAC address 52:54:00:52:49:98 in network mk-multinode-20211117235248-20973
	I1117 23:56:11.387301   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:49:98", ip: ""} in network mk-multinode-20211117235248-20973: {Iface:virbr1 ExpiryTime:2021-11-18 00:53:01 +0000 UTC Type:0 Mac:52:54:00:52:49:98 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-20211117235248-20973 Clientid:01:52:54:00:52:49:98}
	I1117 23:56:11.387345   30206 main.go:130] libmachine: (multinode-20211117235248-20973) DBG | domain multinode-20211117235248-20973 has defined IP address 192.168.39.105 and MAC address 52:54:00:52:49:98 in network mk-multinode-20211117235248-20973
	I1117 23:56:11.387466   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetSSHPort
	I1117 23:56:11.387610   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetSSHKeyPath
	I1117 23:56:11.387745   30206 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetSSHUsername
	I1117 23:56:11.387881   30206 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/multinode-20211117235248-20973/id_rsa Username:docker}
	I1117 23:56:11.473001   30206 ssh_runner.go:152] Run: systemctl --version
	I1117 23:56:11.479111   30206 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1117 23:56:11.494997   30206 kubeconfig.go:92] found "multinode-20211117235248-20973" server: "https://192.168.39.105:8443"
	I1117 23:56:11.495118   30206 api_server.go:165] Checking apiserver status ...
	I1117 23:56:11.495298   30206 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 23:56:11.508510   30206 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2648/cgroup
	I1117 23:56:11.518729   30206 api_server.go:181] apiserver freezer: "10:freezer:/kubepods/burstable/pode963c6f2a8c5ec4e27e3fa2793663cb8/bfe14aa1093b02f40282aec83b6ea1c6e7c8b88b98f493baae38e347dc88f82d"
	I1117 23:56:11.518784   30206 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pode963c6f2a8c5ec4e27e3fa2793663cb8/bfe14aa1093b02f40282aec83b6ea1c6e7c8b88b98f493baae38e347dc88f82d/freezer.state
	I1117 23:56:11.528604   30206 api_server.go:203] freezer state: "THAWED"
	I1117 23:56:11.528626   30206 api_server.go:240] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I1117 23:56:11.534597   30206 api_server.go:266] https://192.168.39.105:8443/healthz returned 200:
	ok
	I1117 23:56:11.534615   30206 status.go:419] multinode-20211117235248-20973 apiserver status = Running (err=<nil>)
	I1117 23:56:11.534625   30206 status.go:255] multinode-20211117235248-20973 status: &{Name:multinode-20211117235248-20973 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 23:56:11.534643   30206 status.go:253] checking status of multinode-20211117235248-20973-m02 ...
	I1117 23:56:11.534975   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.535012   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.545800   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45943
	I1117 23:56:11.546165   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.546612   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.546635   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.547009   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.547173   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetState
	I1117 23:56:11.550375   30206 status.go:328] multinode-20211117235248-20973-m02 host status = "Running" (err=<nil>)
	I1117 23:56:11.550391   30206 host.go:66] Checking if "multinode-20211117235248-20973-m02" exists ...
	I1117 23:56:11.550718   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.550752   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.561760   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35751
	I1117 23:56:11.562174   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.562611   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.562635   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.562962   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.563178   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetIP
	I1117 23:56:11.569000   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | domain multinode-20211117235248-20973-m02 has defined MAC address 52:54:00:2c:fc:4c in network mk-multinode-20211117235248-20973
	I1117 23:56:11.569471   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:fc:4c", ip: ""} in network mk-multinode-20211117235248-20973: {Iface:virbr1 ExpiryTime:2021-11-18 00:54:27 +0000 UTC Type:0 Mac:52:54:00:2c:fc:4c Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:multinode-20211117235248-20973-m02 Clientid:01:52:54:00:2c:fc:4c}
	I1117 23:56:11.569500   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | domain multinode-20211117235248-20973-m02 has defined IP address 192.168.39.151 and MAC address 52:54:00:2c:fc:4c in network mk-multinode-20211117235248-20973
	I1117 23:56:11.569608   30206 host.go:66] Checking if "multinode-20211117235248-20973-m02" exists ...
	I1117 23:56:11.569971   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.570012   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.580736   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44795
	I1117 23:56:11.581175   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.581640   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.581663   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.581967   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.582141   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .DriverName
	I1117 23:56:11.582314   30206 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 23:56:11.582335   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetSSHHostname
	I1117 23:56:11.587373   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | domain multinode-20211117235248-20973-m02 has defined MAC address 52:54:00:2c:fc:4c in network mk-multinode-20211117235248-20973
	I1117 23:56:11.587738   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:fc:4c", ip: ""} in network mk-multinode-20211117235248-20973: {Iface:virbr1 ExpiryTime:2021-11-18 00:54:27 +0000 UTC Type:0 Mac:52:54:00:2c:fc:4c Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:multinode-20211117235248-20973-m02 Clientid:01:52:54:00:2c:fc:4c}
	I1117 23:56:11.587779   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) DBG | domain multinode-20211117235248-20973-m02 has defined IP address 192.168.39.151 and MAC address 52:54:00:2c:fc:4c in network mk-multinode-20211117235248-20973
	I1117 23:56:11.587932   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetSSHPort
	I1117 23:56:11.588105   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetSSHKeyPath
	I1117 23:56:11.588256   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetSSHUsername
	I1117 23:56:11.588395   30206 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/machines/multinode-20211117235248-20973-m02/id_rsa Username:docker}
	I1117 23:56:11.679609   30206 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I1117 23:56:11.694657   30206 status.go:255] multinode-20211117235248-20973-m02 status: &{Name:multinode-20211117235248-20973-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1117 23:56:11.694696   30206 status.go:253] checking status of multinode-20211117235248-20973-m03 ...
	I1117 23:56:11.695049   30206 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 23:56:11.695089   30206 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1117 23:56:11.705913   30206 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33031
	I1117 23:56:11.706358   30206 main.go:130] libmachine: () Calling .GetVersion
	I1117 23:56:11.706809   30206 main.go:130] libmachine: Using API Version  1
	I1117 23:56:11.706829   30206 main.go:130] libmachine: () Calling .SetConfigRaw
	I1117 23:56:11.707203   30206 main.go:130] libmachine: () Calling .GetMachineName
	I1117 23:56:11.707449   30206 main.go:130] libmachine: (multinode-20211117235248-20973-m03) Calling .GetState
	I1117 23:56:11.710489   30206 status.go:328] multinode-20211117235248-20973-m03 host status = "Stopped" (err=<nil>)
	I1117 23:56:11.710505   30206 status.go:341] host is not running, skipping remaining checks
	I1117 23:56:11.710512   30206 status.go:255] multinode-20211117235248-20973-m03 status: &{Name:multinode-20211117235248-20973-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.96s)

TestMultiNode/serial/StartAfterStop (49.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 node start m03 --alsologtostderr
E1117 23:56:57.020172   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
multinode_test.go:236: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211117235248-20973 node start m03 --alsologtostderr: (49.140178629s)
multinode_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (49.78s)

TestMultiNode/serial/RestartKeepsNodes (509.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211117235248-20973
multinode_test.go:272: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20211117235248-20973
E1117 23:57:24.707151   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1117 23:57:27.856158   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:58:51.177471   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1117 23:59:47.650695   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
multinode_test.go:272: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20211117235248-20973: (3m6.250081938s)
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true -v=8 --alsologtostderr
E1118 00:01:57.019625   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1118 00:02:27.856018   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1118 00:04:47.651585   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true -v=8 --alsologtostderr: (5m22.82198602s)
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211117235248-20973
--- PASS: TestMultiNode/serial/RestartKeepsNodes (509.18s)

TestMultiNode/serial/DeleteNode (2.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 node delete m03
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211117235248-20973 node delete m03: (1.577603012s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

TestMultiNode/serial/StopMultiNode (184.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 stop
E1118 00:06:10.701977   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1118 00:06:57.020220   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1118 00:07:27.855849   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1118 00:08:20.068532   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
multinode_test.go:296: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211117235248-20973 stop: (3m4.218029137s)
multinode_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211117235248-20973 status: exit status 7 (82.210475ms)

-- stdout --
	multinode-20211117235248-20973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211117235248-20973-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr: exit status 7 (82.161922ms)

-- stdout --
	multinode-20211117235248-20973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211117235248-20973-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1118 00:08:37.290032   31406 out.go:297] Setting OutFile to fd 1 ...
	I1118 00:08:37.290149   31406 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:08:37.290160   31406 out.go:310] Setting ErrFile to fd 2...
	I1118 00:08:37.290165   31406 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:08:37.290287   31406 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1118 00:08:37.290484   31406 out.go:304] Setting JSON to false
	I1118 00:08:37.290502   31406 mustload.go:65] Loading cluster: multinode-20211117235248-20973
	I1118 00:08:37.290877   31406 config.go:176] Loaded profile config "multinode-20211117235248-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1118 00:08:37.290893   31406 status.go:253] checking status of multinode-20211117235248-20973 ...
	I1118 00:08:37.291292   31406 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:08:37.291339   31406 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:08:37.301591   31406 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44563
	I1118 00:08:37.302063   31406 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:08:37.302617   31406 main.go:130] libmachine: Using API Version  1
	I1118 00:08:37.302639   31406 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:08:37.303035   31406 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:08:37.303233   31406 main.go:130] libmachine: (multinode-20211117235248-20973) Calling .GetState
	I1118 00:08:37.306158   31406 status.go:328] multinode-20211117235248-20973 host status = "Stopped" (err=<nil>)
	I1118 00:08:37.306174   31406 status.go:341] host is not running, skipping remaining checks
	I1118 00:08:37.306179   31406 status.go:255] multinode-20211117235248-20973 status: &{Name:multinode-20211117235248-20973 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1118 00:08:37.306201   31406 status.go:253] checking status of multinode-20211117235248-20973-m02 ...
	I1118 00:08:37.306473   31406 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1118 00:08:37.306530   31406 main.go:130] libmachine: Launching plugin server for driver kvm2
	I1118 00:08:37.316334   31406 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39937
	I1118 00:08:37.316672   31406 main.go:130] libmachine: () Calling .GetVersion
	I1118 00:08:37.317052   31406 main.go:130] libmachine: Using API Version  1
	I1118 00:08:37.317071   31406 main.go:130] libmachine: () Calling .SetConfigRaw
	I1118 00:08:37.317356   31406 main.go:130] libmachine: () Calling .GetMachineName
	I1118 00:08:37.317527   31406 main.go:130] libmachine: (multinode-20211117235248-20973-m02) Calling .GetState
	I1118 00:08:37.320100   31406 status.go:328] multinode-20211117235248-20973-m02 host status = "Stopped" (err=<nil>)
	I1118 00:08:37.320115   31406 status.go:341] host is not running, skipping remaining checks
	I1118 00:08:37.320120   31406 status.go:255] multinode-20211117235248-20973-m02 status: &{Name:multinode-20211117235248-20973-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.38s)

TestMultiNode/serial/RestartMultiNode (216.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1118 00:09:47.651533   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1118 00:11:57.020103   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
multinode_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211117235248-20973 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m35.724982847s)
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211117235248-20973 status --alsologtostderr
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (216.27s)

TestMultiNode/serial/ValidateNameConflict (60s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211117235248-20973
multinode_test.go:434: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211117235248-20973-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20211117235248-20973-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (99.730463ms)

-- stdout --
	* [multinode-20211117235248-20973-m02] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_LOCATION=12739
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20211117235248-20973-m02' is duplicated with machine name 'multinode-20211117235248-20973-m02' in profile 'multinode-20211117235248-20973'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211117235248-20973-m03 --driver=kvm2  --container-runtime=containerd
E1118 00:12:27.856539   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
multinode_test.go:442: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211117235248-20973-m03 --driver=kvm2  --container-runtime=containerd: (58.593078433s)
multinode_test.go:449: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211117235248-20973
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20211117235248-20973: exit status 80 (251.506964ms)

-- stdout --
	* Adding node m03 to cluster multinode-20211117235248-20973
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20211117235248-20973-m03 already exists in multinode-20211117235248-20973-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20211117235248-20973-m03
multinode_test.go:454: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20211117235248-20973-m03: (1.005001686s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (60.00s)

TestPreload (119.15s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211118001315-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211118001315-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m24.59897971s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211118001315-20973 -- sudo crictl pull busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20211118001315-20973 -- sudo crictl pull busybox: (1.471555158s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211118001315-20973 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3
E1118 00:14:47.650872   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211118001315-20973 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3: (31.777104336s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211118001315-20973 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20211118001315-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20211118001315-20973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20211118001315-20973: (1.071232088s)
--- PASS: TestPreload (119.15s)

TestScheduledStopUnix (128.84s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20211118001514-20973 --memory=2048 --driver=kvm2  --container-runtime=containerd
E1118 00:15:31.179924   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20211118001514-20973 --memory=2048 --driver=kvm2  --container-runtime=containerd: (56.826190061s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211118001514-20973 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20211118001514-20973 -n scheduled-stop-20211118001514-20973
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211118001514-20973 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211118001514-20973 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211118001514-20973 -n scheduled-stop-20211118001514-20973
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211118001514-20973
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211118001514-20973 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E1118 00:16:57.023178   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211118001514-20973
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20211118001514-20973: exit status 7 (71.355388ms)

-- stdout --
	scheduled-stop-20211118001514-20973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211118001514-20973 -n scheduled-stop-20211118001514-20973
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211118001514-20973 -n scheduled-stop-20211118001514-20973: exit status 7 (66.352973ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20211118001514-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20211118001514-20973
--- PASS: TestScheduledStopUnix (128.84s)

TestRunningBinaryUpgrade (235.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.3212256092.exe start -p running-upgrade-20211118001723-20973 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E1118 00:17:27.856190   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.3212256092.exe start -p running-upgrade-20211118001723-20973 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (3m9.01039968s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20211118001723-20973 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20211118001723-20973 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (43.850112984s)
helpers_test.go:175: Cleaning up "running-upgrade-20211118001723-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20211118001723-20973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20211118001723-20973: (1.304296511s)
--- PASS: TestRunningBinaryUpgrade (235.03s)

TestKubernetesUpgrade (237.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m38.727919845s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211118001853-20973

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211118001853-20973: (2.120417027s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20211118001853-20973 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20211118001853-20973 status --format={{.Host}}: exit status 7 (81.310996ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m21.570169377s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20211118001853-20973 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (154.509251ms)

-- stdout --
	* [kubernetes-upgrade-20211118001853-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_LOCATION=12739
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.4-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20211118001853-20973
	    minikube start -p kubernetes-upgrade-20211118001853-20973 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211118001853-209732 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.4-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211118001853-20973 --kubernetes-version=v1.22.4-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1118 00:21:57.019485   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211118001853-20973 --memory=2200 --kubernetes-version=v1.22.4-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (53.650845567s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211118001853-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211118001853-20973

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211118001853-20973: (1.471621407s)
--- PASS: TestKubernetesUpgrade (237.84s)

TestNoKubernetes/serial/Start (55.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211118001723-20973 --no-kubernetes --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211118001723-20973 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (55.363749523s)
--- PASS: TestNoKubernetes/serial/Start (55.36s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20211118001723-20973 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20211118001723-20973 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.74765ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (1.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:121: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.45s)

TestNoKubernetes/serial/Stop (2.11s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:100: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20211118001723-20973
no_kubernetes_test.go:100: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20211118001723-20973: (2.114734082s)
--- PASS: TestNoKubernetes/serial/Stop (2.11s)

TestNoKubernetes/serial/StartNoArgs (28.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211118001723-20973 --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211118001723-20973 --driver=kvm2  --container-runtime=containerd: (28.377391296s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.38s)

TestPause/serial/Start (106.48s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211118001848-20973 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211118001848-20973 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m46.48004992s)
--- PASS: TestPause/serial/Start (106.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20211118001723-20973 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20211118001723-20973 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.986052ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestStoppedBinaryUpgrade/Upgrade (163.29s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1062025562.exe start -p stopped-upgrade-20211118001857-20973 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E1118 00:19:47.651376   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1062025562.exe start -p stopped-upgrade-20211118001857-20973 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m44.390920091s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1062025562.exe -p stopped-upgrade-20211118001857-20973 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1062025562.exe -p stopped-upgrade-20211118001857-20973 stop: (2.192497944s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20211118001857-20973 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20211118001857-20973 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.701716289s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (163.29s)

TestPause/serial/SecondStartNoReconfiguration (44.12s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211118001848-20973 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211118001848-20973 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.095835901s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.12s)

TestNetworkPlugins/group/false (0.45s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-linux-amd64 start -p false-20211118002119-20973 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20211118002119-20973 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (146.052153ms)

-- stdout --
	* [false-20211118002119-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	  - MINIKUBE_LOCATION=12739
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1118 00:21:19.145198    4399 out.go:297] Setting OutFile to fd 1 ...
	I1118 00:21:19.145350    4399 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:21:19.145360    4399 out.go:310] Setting ErrFile to fd 2...
	I1118 00:21:19.145367    4399 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1118 00:21:19.145589    4399 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/bin
	I1118 00:21:19.146125    4399 out.go:304] Setting JSON to false
	I1118 00:21:19.181833    4399 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":7441,"bootTime":1637187438,"procs":190,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1118 00:21:19.181919    4399 start.go:122] virtualization: kvm guest
	I1118 00:21:19.184549    4399 out.go:176] * [false-20211118002119-20973] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
	I1118 00:21:19.186142    4399 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/kubeconfig
	I1118 00:21:19.184689    4399 notify.go:174] Checking for updates...
	I1118 00:21:19.187871    4399 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1118 00:21:19.189326    4399 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube
	I1118 00:21:19.190888    4399 out.go:176]   - MINIKUBE_LOCATION=12739
	I1118 00:21:19.191443    4399 config.go:176] Loaded profile config "kubernetes-upgrade-20211118001853-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.4-rc.0
	I1118 00:21:19.191557    4399 config.go:176] Loaded profile config "pause-20211118001848-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.3
	I1118 00:21:19.191628    4399 config.go:176] Loaded profile config "stopped-upgrade-20211118001857-20973": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1118 00:21:19.191664    4399 driver.go:343] Setting default libvirt URI to qemu:///system
	I1118 00:21:19.222161    4399 out.go:176] * Using the kvm2 driver based on user configuration
	I1118 00:21:19.222187    4399 start.go:280] selected driver: kvm2
	I1118 00:21:19.222192    4399 start.go:775] validating driver "kvm2" against <nil>
	I1118 00:21:19.222207    4399 start.go:786] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1118 00:21:19.224327    4399 out.go:176] 
	W1118 00:21:19.224450    4399 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1118 00:21:19.225961    4399 out.go:176] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20211118002119-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20211118002119-20973
--- PASS: TestNetworkPlugins/group/false (0.45s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211118001848-20973 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20211118001848-20973 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20211118001848-20973 --output=json --layout=cluster: exit status 2 (313.320765ms)

-- stdout --
	{"Name":"pause-20211118001848-20973","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20211118001848-20973","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.96s)

=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20211118001848-20973 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

TestPause/serial/PauseAgain (5.54s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211118001848-20973 --alsologtostderr -v=5
pause_test.go:108: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20211118001848-20973 --alsologtostderr -v=5: (5.541516384s)
--- PASS: TestPause/serial/PauseAgain (5.54s)

TestPause/serial/DeletePaused (1.03s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20211118001848-20973 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20211118001848-20973 --alsologtostderr -v=5: (1.031315421s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

TestPause/serial/VerifyDeletedResources (0.36s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20211118001857-20973
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20211118001857-20973: (1.118805319s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (146.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0
E1118 00:22:50.702314   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m26.911396787s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.91s)

TestStartStop/group/no-preload/serial/FirstStart (120.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0: (2m0.583391374s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (120.58s)

TestStartStop/group/embed-certs/serial/FirstStart (106.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211118002307-20973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3
E1118 00:24:47.650944   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20211118002307-20973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3: (1m46.01502021s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (106.02s)

TestStartStop/group/no-preload/serial/DeployApp (10.64s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c951b66c-ede1-4598-a012-2447b7c94d56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [c951b66c-ede1-4598-a012-2447b7c94d56] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.020371943s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.64s)

TestStartStop/group/embed-certs/serial/DeployApp (8.55s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211118002307-20973 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b1c3885a-e91f-4e95-9da8-d7e477ce9ee0] Pending
helpers_test.go:342: "busybox" [b1c3885a-e91f-4e95-9da8-d7e477ce9ee0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [b1c3885a-e91f-4e95-9da8-d7e477ce9ee0] Running
E1118 00:25:00.069428   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.034910562s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211118002307-20973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.55s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20211118002307-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211118002307-20973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20211118002250-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (92.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20211118002307-20973 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20211118002307-20973 --alsologtostderr -v=3: (1m32.704428766s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.70s)

TestStartStop/group/no-preload/serial/Stop (92.51s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20211118002250-20973 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20211118002250-20973 --alsologtostderr -v=3: (1m32.513884357s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.51s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211118002250-20973 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [01c33d07-4806-11ec-8e44-52540009518b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [01c33d07-4806-11ec-8e44-52540009518b] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.028287469s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211118002250-20973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20211118002250-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211118002250-20973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (94.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20211118002250-20973 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20211118002250-20973 --alsologtostderr -v=3: (1m34.98576323s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (94.99s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (87.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211118002540-20973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20211118002540-20973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3: (1m27.325296049s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (87.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973: exit status 7 (74.897899ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20211118002307-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (424.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211118002307-20973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20211118002307-20973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3: (7m3.925383303s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (424.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973: exit status 7 (98.549223ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20211118002250-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (361.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0
E1118 00:26:57.019444   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0: (6m1.418923053s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (361.71s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973: exit status 7 (77.602805ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20211118002250-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20211118002250-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.635044302s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.71s)

TestStartStop/group/old-k8s-version/serial/SecondStart (528.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20211118002250-20973 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (8m48.464534679s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (528.76s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211118002540-20973 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5794cdbc-c4cf-4b88-91ff-57903ea27b0a] Pending
helpers_test.go:342: "busybox" [5794cdbc-c4cf-4b88-91ff-57903ea27b0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5794cdbc-c4cf-4b88-91ff-57903ea27b0a] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.064941173s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211118002540-20973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.64s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20211118002540-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211118002540-20973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-different-port/serial/Stop (92.52s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20211118002540-20973 --alsologtostderr -v=3
E1118 00:27:27.855891   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20211118002540-20973 --alsologtostderr -v=3: (1m32.518594736s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (92.52s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973: exit status 7 (82.386084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20211118002540-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (389.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211118002540-20973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3
E1118 00:29:47.651614   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1118 00:31:57.020464   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1118 00:32:11.180368   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1118 00:32:27.856564   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20211118002540-20973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.3: (6m29.356709178s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (389.77s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-44bzg" [ba2625ca-7319-4310-b56c-cb2117474f21] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-654cf69797-44bzg" [ba2625ca-7319-4310-b56c-cb2117474f21] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.019392822s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-44bzg" [ba2625ca-7319-4310-b56c-cb2117474f21] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01345824s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211118002250-20973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/Pause (2.85s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20211118002250-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973: exit status 2 (258.295403ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973: exit status 2 (264.071628ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20211118002250-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211118002250-20973 -n no-preload-20211118002250-20973
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.85s)

TestStartStop/group/newest-cni/serial/FirstStart (72.70s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211118003303-20973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211118003303-20973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0: (1m12.700231052s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.70s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-5ms48" [e6489fd0-78f3-4467-84e5-b2630a12e920] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-654cf69797-5ms48" [e6489fd0-78f3-4467-84e5-b2630a12e920] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.020961692s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-5ms48" [e6489fd0-78f3-4467-84e5-b2630a12e920] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01237546s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211118002307-20973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20211118002307-20973 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20211118002307-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973: exit status 2 (277.095988ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973: exit status 2 (282.033027ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20211118002307-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20211118002307-20973 -n embed-certs-20211118002307-20973
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.98s)

TestNetworkPlugins/group/auto/Start (84.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20211118002118-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20211118002118-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (1m24.36187728s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.36s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20211118003303-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20211118003303-20973 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.292163983s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)

TestStartStop/group/newest-cni/serial/Stop (5.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20211118003303-20973 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20211118003303-20973 --alsologtostderr -v=3: (5.184282869s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.18s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973: exit status 7 (80.026829ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20211118003303-20973 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (84.51s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211118003303-20973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0
E1118 00:34:47.651412   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
E1118 00:34:51.865467   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:51.870748   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:51.881058   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:51.901375   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:51.941627   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:52.021975   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:52.182281   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:52.502845   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:53.143748   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:34:58.803990   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:35:01.364763   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:35:06.485493   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:35:16.726701   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211118003303-20973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.4-rc.0: (1m24.222473062s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (84.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20211118002118-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20211118002118-20973 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-66gmt" [be13e813-7132-4046-9844-e9e6f3452b31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-66gmt" [be13e813-7132-4046-9844-e9e6f3452b31] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.028145675s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.53s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-2ln57" [6628183e-5a72-4180-8897-49953a3edd08] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-654cf69797-2ln57" [6628183e-5a72-4180-8897-49953a3edd08] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.021760944s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-654cf69797-2ln57" [6628183e-5a72-4180-8897-49953a3edd08] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01286882s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211118002540-20973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20211118002118-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20211118002118-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20211118002118-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/Start (102.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m42.17866006s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.18s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20211118002540-20973 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.85s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20211118002540-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973: exit status 2 (285.057442ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973: exit status 2 (281.552891ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20211118002540-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
E1118 00:35:37.207137   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20211118002540-20973 -n default-k8s-different-port-20211118002540-20973
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.85s)

TestNetworkPlugins/group/cilium/Start (126.97s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (2m6.968193884s)
--- PASS: TestNetworkPlugins/group/cilium/Start (126.97s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20211118003303-20973 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20211118003303-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973: exit status 2 (264.025647ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973: exit status 2 (262.662039ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20211118003303-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211118003303-20973 -n newest-cni-20211118003303-20973
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)

TestNetworkPlugins/group/calico/Start (131.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p calico-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m11.579467314s)
--- PASS: TestNetworkPlugins/group/calico/Start (131.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-7c5589b6d7-rhc77" [581241ec-4807-11ec-92b9-52540009518b] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018159096s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (9.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-7c5589b6d7-rhc77" [581241ec-4807-11ec-92b9-52540009518b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.794576299s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211118002250-20973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context old-k8s-version-20211118002250-20973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (3.240047475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (9.04s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20211118002250-20973 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20211118002250-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973: exit status 2 (258.548849ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973: exit status 2 (265.248108ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20211118002250-20973 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20211118002250-20973 -n old-k8s-version-20211118002250-20973
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)
E1118 00:38:30.313993   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory

TestNetworkPlugins/group/custom-weave/Start (105.63s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd
E1118 00:36:18.167578   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:36:57.019492   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/ingress-addon-legacy-20211117234527-20973/client.crt: no such file or directory
E1118 00:37:08.391686   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.396973   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.407113   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.427396   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.467791   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.547978   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:08.708347   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:09.028886   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:09.669584   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:10.950823   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
E1118 00:37:13.511032   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd: (1m45.631694509s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (105.63s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-kk8dm" [acfa7cc5-4136-4fb7-889c-c67c654f26ca] Running
E1118 00:37:18.631627   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028924603s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.79s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-bmk8r" [2caa4639-e676-4e0b-ad01-6134adbd6950] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-bmk8r" [2caa4639-e676-4e0b-ad01-6134adbd6950] Running
E1118 00:37:27.855802   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/addons-20211117233455-20973/client.crt: no such file or directory
E1118 00:37:28.872272   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.016488121s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.79s)

TestNetworkPlugins/group/kindnet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

TestNetworkPlugins/group/kindnet/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

TestNetworkPlugins/group/kindnet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

TestNetworkPlugins/group/enable-default-cni/Start (92.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E1118 00:37:40.088544   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m32.27243447s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.27s)

TestNetworkPlugins/group/cilium/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-jflk5" [a671f08f-755f-48cd-8f01-2a8e6d528f80] Running
E1118 00:37:49.352844   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.046164983s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.05s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.25s)

TestNetworkPlugins/group/cilium/NetCatPod (13.59s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-xscvs" [eac83106-b5ba-4310-8498-591ce82d4028] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-xscvs" [eac83106-b5ba-4310-8498-591ce82d4028] Running
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.016586079s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.59s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-weave/NetCatPod (13.84s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-pfqhj" [77470319-e7e8-471e-9699-5dfdc3154753] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-pfqhj" [77470319-e7e8-471e-9699-5dfdc3154753] Running

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 13.324303157s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (13.84s)

TestNetworkPlugins/group/calico/ControllerPod (6.24s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-w7skb" [8d06f60c-30f9-4e66-bfdc-f41f6992c8bd] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.241809827s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.24s)

TestNetworkPlugins/group/cilium/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.35s)

TestNetworkPlugins/group/cilium/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.22s)

TestNetworkPlugins/group/cilium/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.25s)

TestNetworkPlugins/group/calico/KubeletFlags (2.47s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20211118002119-20973 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-linux-amd64 ssh -p calico-20211118002119-20973 "pgrep -a kubelet": (2.467962403s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (2.47s)

TestNetworkPlugins/group/calico/NetCatPod (12.82s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-p96qn" [6dedf687-ec75-4902-913a-206daaf07cef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-7bfd7f67bc-p96qn" [6dedf687-ec75-4902-913a-206daaf07cef] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.039284983s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.82s)

TestNetworkPlugins/group/flannel/Start (83.81s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m23.812459639s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.81s)

TestNetworkPlugins/group/bridge/Start (102.14s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20211118002119-20973 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m42.142903175s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.14s)

TestNetworkPlugins/group/calico/DNS (0.37s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.37s)

TestNetworkPlugins/group/calico/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-r9826" [ed546b55-bf3a-47c2-9b5c-cd69aae70218] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-r9826" [ed546b55-bf3a-47c2-9b5c-cd69aae70218] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012329217s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (8.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-mfd24" [6e0e97db-65ae-4c6b-ae25-4ab48f65886a] Pending: Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni]) / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:342: "kube-flannel-ds-amd64-mfd24" [6e0e97db-65ae-4c6b-ae25-4ab48f65886a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:342: "kube-flannel-ds-amd64-mfd24" [6e0e97db-65ae-4c6b-ae25-4ab48f65886a] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 8.018567551s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (8.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.54s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context flannel-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-qlj6t" [459cdaaf-0f03-4152-9432-910c6e1412e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1118 00:39:47.651181   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/functional-20211117234207-20973/client.crt: no such file or directory
helpers_test.go:342: "netcat-7bfd7f67bc-qlj6t" [459cdaaf-0f03-4152-9432-910c6e1412e1] Running
E1118 00:39:51.865540   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/no-preload-20211118002250-20973/client.crt: no such file or directory
E1118 00:39:52.234721   20973 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12739-17057-24e369002aeb518840e093d9fb528e6077bdad6e/.minikube/profiles/default-k8s-different-port-20211118002540-20973/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0100893s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.54s)

TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:163: (dbg) Run:  kubectl --context flannel-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:182: (dbg) Run:  kubectl --context flannel-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:232: (dbg) Run:  kubectl --context flannel-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20211118002119-20973 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.46s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20211118002119-20973 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-7bfd7f67bc-48p7l" [a0737b3f-fcf6-44fa-a90c-720671fab7b2] Pending
helpers_test.go:342: "netcat-7bfd7f67bc-48p7l" [a0737b3f-fcf6-44fa-a90c-720671fab7b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-7bfd7f67bc-48p7l" [a0737b3f-fcf6-44fa-a90c-720671fab7b2] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.014554317s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.46s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211118002119-20973 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20211118002119-20973 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (29/285)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.22.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3/cached-images (0.00s)

TestDownloadOnly/v1.22.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3/binaries (0.00s)

TestDownloadOnly/v1.22.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.3/kubectl (0.00s)

TestDownloadOnly/v1.22.4-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.4-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.4-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.4-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.4-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.4-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:401: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (0s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:62: Skipping until https://github.com/kubernetes/minikube/issues/12301 is resolved.
--- SKIP: TestFunctional/parallel/MountCmd/any-port (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:35: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:74: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:39: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.24s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211118002540-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20211118002540-20973
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

TestNetworkPlugins/group/kubenet (0.3s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20211118002118-20973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20211118002118-20973
--- SKIP: TestNetworkPlugins/group/kubenet (0.30s)