Test Report: none_Linux 17735

Commit: 92ccbd1049dad7c606832f9da24cf8bb40191acf:2024-03-27:33769

Test fail (1/174)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 38    | TestAddons/parallel/Registry | 205.95       |

TestAddons/parallel/Registry (205.95s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 11.257644ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00411993s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005051033s
addons_test.go:340: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (33.296979269s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	Unable to use a TTY - input is not a terminal or the right kind of file
	If you don't see a command prompt, try pressing enter.
	warning: couldn't attach to pod/registry-test, falling back to streaming logs: 
	pod default/registry-test terminated (Error)

** /stderr **
addons_test.go:347: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:351: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/03/27 19:48:47 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:48:47 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:47 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:48:48 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:48 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:48:50 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:50 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:48:54 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:54 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:49:02 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:02 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:49:02 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:02 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:49:03 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:03 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:49:05 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:05 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:49:09 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:09 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:49:17 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:18 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:49:18 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:18 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:49:19 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:19 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:49:21 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:21 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:49:25 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:25 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:49:33 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:35 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:49:35 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:35 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:49:36 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:36 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:49:38 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:38 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:49:42 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:42 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:49:50 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:52 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:49:52 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:52 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:49:53 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:53 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:49:55 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:55 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:49:59 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:49:59 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:50:07 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:08 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:50:08 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:08 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:50:09 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:09 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:50:11 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:11 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:50:15 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:15 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:50:23 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:25 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:50:25 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:25 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:50:26 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:26 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:50:28 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:28 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:50:32 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:32 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:50:40 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:48 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:50:48 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:48 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:50:49 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:49 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:50:51 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:51 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:50:55 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:50:55 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:51:03 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:12 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:51:12 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:12 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:51:13 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:13 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:51:15 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:15 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:51:19 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:19 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:51:27 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
addons_test.go:385: failed to check external access to http://10.128.15.240:5000: GET http://10.128.15.240:5000 giving up after 5 attempt(s): Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.035851996s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force                  |          |         |                |                     |                     |
	|         | --alsologtostderr                    |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |                |                     |                     |
	|         | --container-runtime=docker           |          |         |                |                     |                     |
	|         | --driver=none                        |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force                  |          |         |                |                     |                     |
	|         | --alsologtostderr                    |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3         |          |         |                |                     |                     |
	|         | --container-runtime=docker           |          |         |                |                     |                     |
	|         | --driver=none                        |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| start   | -o=json --download-only -p           | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC |                     |
	|         | minikube --force --alsologtostderr   |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0  |          |         |                |                     |                     |
	|         | --container-runtime=docker           |          |         |                |                     |                     |
	|         | --driver=none                        |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |                |                     |                     |
	|         | --binary-mirror                      |          |         |                |                     |                     |
	|         | http://127.0.0.1:43581               |          |         |                |                     |                     |
	|         | --driver=none                        |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	|         | -v=1 --memory=2048                   |          |         |                |                     |                     |
	|         | --wait=true --driver=none            |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:48 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |                |                     |                     |
	|         | --addons=registry                    |          |         |                |                     |                     |
	|         | --addons=metrics-server              |          |         |                |                     |                     |
	|         | --addons=volumesnapshots             |          |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |                |                     |                     |
	|         | --addons=gcp-auth                    |          |         |                |                     |                     |
	|         | --addons=cloud-spanner               |          |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |                |                     |                     |
	|         | --addons=yakd --driver=none          |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |                |                     |                     |
	|         | --addons=helm-tiller                 |          |         |                |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:48 UTC | 27 Mar 24 19:48 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:51 UTC | 27 Mar 24 19:51 UTC |
	|         | registry --alsologtostderr           |          |         |                |                     |                     |
	|         | -v=1                                 |          |         |                |                     |                     |
	|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:46:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:46:54.297113  785442 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:46:54.297417  785442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:54.297428  785442 out.go:304] Setting ErrFile to fd 2...
	I0327 19:46:54.297432  785442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:54.297666  785442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	I0327 19:46:54.299145  785442 out.go:298] Setting JSON to false
	I0327 19:46:54.300396  785442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12552,"bootTime":1711556262,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:46:54.300480  785442 start.go:139] virtualization: kvm guest
	I0327 19:46:54.302653  785442 out.go:177] * minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:46:54.304797  785442 notify.go:220] Checking for updates...
	I0327 19:46:54.304811  785442 out.go:177]   - MINIKUBE_LOCATION=17735
	W0327 19:46:54.304730  785442 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:46:54.306386  785442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:46:54.308073  785442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:46:54.309694  785442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:46:54.311064  785442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 19:46:54.312456  785442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:46:54.313975  785442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:46:54.326966  785442 out.go:177] * Using the none driver based on user configuration
	I0327 19:46:54.328502  785442 start.go:297] selected driver: none
	I0327 19:46:54.328523  785442 start.go:901] validating driver "none" against <nil>
	I0327 19:46:54.328542  785442 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:46:54.328575  785442 start.go:1733] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0327 19:46:54.328897  785442 out.go:239] ! The 'none' driver does not respect the --memory flag
	I0327 19:46:54.329397  785442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 19:46:54.329627  785442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:46:54.329697  785442 cni.go:84] Creating CNI manager for ""
	I0327 19:46:54.329711  785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 19:46:54.329727  785442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 19:46:54.329771  785442 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:46:54.331429  785442 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0327 19:46:54.333098  785442 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json ...
	I0327 19:46:54.333138  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json: {Name:mkc12f016488e18252a34aa57adffbeb5566b2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:54.333273  785442 start.go:360] acquireMachinesLock for minikube: {Name:mk84f2ad31410d090434f21fe1137802c30e2ddd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 19:46:54.333306  785442 start.go:364] duration metric: took 18.881µs to acquireMachinesLock for "minikube"
	I0327 19:46:54.333319  785442 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 19:46:54.333381  785442 start.go:125] createHost starting for "" (driver="none")
	I0327 19:46:54.335042  785442 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0327 19:46:54.336368  785442 exec_runner.go:51] Run: systemctl --version
	I0327 19:46:54.339074  785442 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0327 19:46:54.339114  785442 client.go:168] LocalClient.Create starting
	I0327 19:46:54.339169  785442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca.pem
	I0327 19:46:54.339206  785442 main.go:141] libmachine: Decoding PEM data...
	I0327 19:46:54.339226  785442 main.go:141] libmachine: Parsing certificate...
	I0327 19:46:54.339278  785442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-771440/.minikube/certs/cert.pem
	I0327 19:46:54.339301  785442 main.go:141] libmachine: Decoding PEM data...
	I0327 19:46:54.339313  785442 main.go:141] libmachine: Parsing certificate...
	I0327 19:46:54.339627  785442 client.go:171] duration metric: took 504.972µs to LocalClient.Create
	I0327 19:46:54.339653  785442 start.go:167] duration metric: took 583.18µs to libmachine.API.Create "minikube"
	I0327 19:46:54.339668  785442 start.go:293] postStartSetup for "minikube" (driver="none")
	I0327 19:46:54.339709  785442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 19:46:54.339751  785442 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 19:46:54.348006  785442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 19:46:54.348035  785442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 19:46:54.348045  785442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 19:46:54.350068  785442 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0327 19:46:54.351299  785442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-771440/.minikube/addons for local assets ...
	I0327 19:46:54.351358  785442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-771440/.minikube/files for local assets ...
	I0327 19:46:54.351378  785442 start.go:296] duration metric: took 11.699938ms for postStartSetup
	I0327 19:46:54.351970  785442 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json ...
	I0327 19:46:54.352102  785442 start.go:128] duration metric: took 18.709592ms to createHost
	I0327 19:46:54.352117  785442 start.go:83] releasing machines lock for "minikube", held for 18.803417ms
	I0327 19:46:54.352457  785442 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 19:46:54.352536  785442 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0327 19:46:54.354478  785442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 19:46:54.354517  785442 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:46:54.365437  785442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0327 19:46:54.365470  785442 start.go:494] detecting cgroup driver to use...
	I0327 19:46:54.365501  785442 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 19:46:54.365662  785442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:46:54.387549  785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 19:46:54.398146  785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 19:46:54.408684  785442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 19:46:54.408777  785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 19:46:54.418207  785442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 19:46:54.426947  785442 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 19:46:54.438080  785442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 19:46:54.449398  785442 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 19:46:54.458364  785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 19:46:54.468297  785442 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 19:46:54.478156  785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
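The run of `sed` invocations above rewrites `/etc/containerd/config.toml` in place so containerd uses the cgroupfs driver and the standard CNI conf directory. A minimal sketch of the same style of edit, applied to a throwaway sample config (the fragment contents and temp path are illustrative, not taken from this host):

```shell
# Create a sample containerd config fragment (illustrative contents).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/opt/cni/net.d"
EOF

# Same indentation-preserving rewrites the log shows: force cgroupfs
# (SystemdCgroup = false) and the standard CNI conf directory.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

grep 'SystemdCgroup = false' "$cfg"
grep 'conf_dir = "/etc/cni/net.d"' "$cfg"
rm -f "$cfg"
```

The `\1` backreference keeps whatever leading indentation the TOML already had, which is why the log's patterns capture `( *)` rather than anchoring at column zero.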
	I0327 19:46:54.514427  785442 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 19:46:54.524589  785442 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 19:46:54.533343  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:46:54.734315  785442 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0327 19:46:54.796360  785442 start.go:494] detecting cgroup driver to use...
	I0327 19:46:54.796421  785442 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 19:46:54.796556  785442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:46:54.817115  785442 exec_runner.go:51] Run: which cri-dockerd
	I0327 19:46:54.818139  785442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 19:46:54.827471  785442 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0327 19:46:54.827499  785442 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0327 19:46:54.827551  785442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0327 19:46:54.835777  785442 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 19:46:54.835957  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube469473553 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0327 19:46:54.844359  785442 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0327 19:46:55.048434  785442 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0327 19:46:55.259024  785442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 19:46:55.259213  785442 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0327 19:46:55.259230  785442 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0327 19:46:55.259278  785442 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0327 19:46:55.271662  785442 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0327 19:46:55.271908  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1453567436 /etc/docker/daemon.json
	I0327 19:46:55.282342  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:46:55.511690  785442 exec_runner.go:51] Run: sudo systemctl restart docker
	I0327 19:46:55.785065  785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 19:46:55.796163  785442 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0327 19:46:55.811325  785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 19:46:55.821978  785442 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0327 19:46:56.022280  785442 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0327 19:46:56.220580  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:46:56.429336  785442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0327 19:46:56.445339  785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 19:46:56.456952  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:46:56.680854  785442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0327 19:46:56.751285  785442 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 19:46:56.751378  785442 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0327 19:46:56.752850  785442 start.go:562] Will wait 60s for crictl version
	I0327 19:46:56.752928  785442 exec_runner.go:51] Run: which crictl
	I0327 19:46:56.753970  785442 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0327 19:46:56.798136  785442 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0327 19:46:56.798205  785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0327 19:46:56.819045  785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0327 19:46:56.842572  785442 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0327 19:46:56.842651  785442 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0327 19:46:56.845432  785442 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0327 19:46:56.846891  785442 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 19:46:56.847015  785442 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 19:46:56.847029  785442 kubeadm.go:928] updating node { 10.128.15.240 8443 v1.29.3 docker true true} ...
	I0327 19:46:56.847139  785442 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-15 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.240 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0327 19:46:56.847192  785442 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
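Here minikube reads Docker's cgroup driver so the KubeletConfiguration it renders can pin the matching `cgroupDriver: cgroupfs`; a mismatch between the two is a classic kubelet startup failure. A minimal sketch of the same consistency check against a saved kubeadm config (the sample fragment and temp path are illustrative; on a live host the Docker side is the `docker info --format {{.CgroupDriver}}` call shown in the log):

```shell
# Extract the kubelet's cgroup driver from a kubeadm/kubelet config
# fragment (stand-in for /var/tmp/minikube/kubeadm.yaml).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
EOF

kubelet_driver=$(awk '/^cgroupDriver:/ {print $2}' "$cfg")
echo "$kubelet_driver"
rm -f "$cfg"
```

Comparing that value with the runtime's reported driver before starting the kubelet catches the mismatch early instead of at pod-sandbox creation time.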
	I0327 19:46:56.894393  785442 cni.go:84] Creating CNI manager for ""
	I0327 19:46:56.894427  785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 19:46:56.894438  785442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 19:46:56.894475  785442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.240 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-15 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 19:46:56.894642  785442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-15"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	nodefs.available: "0%"
	nodefs.inodesFree: "0%"
	imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 19:46:56.894704  785442 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 19:46:56.902934  785442 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0327 19:46:56.902990  785442 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0327 19:46:56.911456  785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0327 19:46:56.911470  785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0327 19:46:56.911482  785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
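The `?checksum=file:...sha256` suffix on the download URLs above tells the downloader to verify each binary against its published SHA-256 file as it fetches it. The equivalent manual check with `sha256sum` looks like this (a sketch; the temp file stands in for a downloaded kubelet/kubectl/kubeadm binary):

```shell
# Verify a file against a detached SHA-256 checksum, the way the
# checksum URL parameter validates the downloaded binaries.
f=$(mktemp)
printf 'fake-binary\n' > "$f"
sha256sum "$f" | awk '{print $1}' > "$f.sha256"

# Recompute the digest and compare it with the published one.
want=$(cat "$f.sha256")
got=$(sha256sum "$f" | awk '{print $1}')
[ "$want" = "$got" ] && echo "checksum ok"
rm -f "$f" "$f.sha256"
```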
	I0327 19:46:56.911501  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0327 19:46:56.911535  785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:46:56.911546  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0327 19:46:56.923953  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0327 19:46:56.954436  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3780611543 /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 19:46:56.970895  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube201494140 /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 19:46:57.051291  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4245865079 /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 19:46:57.139749  785442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 19:46:57.148685  785442 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0327 19:46:57.148707  785442 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0327 19:46:57.148742  785442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0327 19:46:57.156979  785442 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0327 19:46:57.157155  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3168944734 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0327 19:46:57.165811  785442 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0327 19:46:57.165865  785442 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0327 19:46:57.165907  785442 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0327 19:46:57.173594  785442 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 19:46:57.173741  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2299999754 /lib/systemd/system/kubelet.service
	I0327 19:46:57.182601  785442 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0327 19:46:57.182727  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320304341 /var/tmp/minikube/kubeadm.yaml.new
	I0327 19:46:57.191495  785442 exec_runner.go:51] Run: grep 10.128.15.240	control-plane.minikube.internal$ /etc/hosts
	I0327 19:46:57.192829  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:46:57.404187  785442 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0327 19:46:57.417961  785442 certs.go:68] Setting up /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube for IP: 10.128.15.240
	I0327 19:46:57.417986  785442 certs.go:194] generating shared ca certs ...
	I0327 19:46:57.418005  785442 certs.go:226] acquiring lock for ca certs: {Name:mk49622af302dd5fe131a9430f1e35c7c09bed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.418175  785442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.key
	I0327 19:46:57.418229  785442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.key
	I0327 19:46:57.418242  785442 certs.go:256] generating profile certs ...
	I0327 19:46:57.418317  785442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key
	I0327 19:46:57.418338  785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt with IP's: []
	I0327 19:46:57.547168  785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt ...
	I0327 19:46:57.547204  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: {Name:mk87f5f426e4a0e3131a1f1fd9ae6dbcf7a19426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.547380  785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key ...
	I0327 19:46:57.547395  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key: {Name:mk65048bbdd8fd3ae6de6c6f48065f5c0dd6a82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.547481  785442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d
	I0327 19:46:57.547498  785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.240]
	I0327 19:46:57.718278  785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d ...
	I0327 19:46:57.718313  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d: {Name:mk159a4f77c97c05a459f1d9737dd9a3dd096860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.718481  785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d ...
	I0327 19:46:57.718504  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d: {Name:mk9aa3a1b68d7ecfb7f221c03b8c794d334e3058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.718583  785442 certs.go:381] copying /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d -> /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt
	I0327 19:46:57.718704  785442 certs.go:385] copying /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d -> /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key
	I0327 19:46:57.718787  785442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key
	I0327 19:46:57.718811  785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0327 19:46:57.897576  785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt ...
	I0327 19:46:57.897622  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt: {Name:mka1edbad7a6d97a670e10222a0268c2708c7c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.897799  785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key ...
	I0327 19:46:57.897821  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key: {Name:mk69a9bfba947e9c4681f82332e9a482b7546864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:46:57.898050  785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 19:46:57.898107  785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca.pem (1078 bytes)
	I0327 19:46:57.898152  785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/cert.pem (1123 bytes)
	I0327 19:46:57.898177  785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/key.pem (1679 bytes)
	I0327 19:46:57.898893  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 19:46:57.899047  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3511566354 /var/lib/minikube/certs/ca.crt
	I0327 19:46:57.908180  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0327 19:46:57.908302  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423390180 /var/lib/minikube/certs/ca.key
	I0327 19:46:57.916481  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 19:46:57.916635  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3343948971 /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 19:46:57.925132  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 19:46:57.925250  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube831950845 /var/lib/minikube/certs/proxy-client-ca.key
	I0327 19:46:57.934151  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0327 19:46:57.934275  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3350324935 /var/lib/minikube/certs/apiserver.crt
	I0327 19:46:57.941918  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 19:46:57.942037  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3478674603 /var/lib/minikube/certs/apiserver.key
	I0327 19:46:57.949469  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 19:46:57.949608  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube113090513 /var/lib/minikube/certs/proxy-client.crt
	I0327 19:46:57.956926  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 19:46:57.957071  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3982661572 /var/lib/minikube/certs/proxy-client.key
	I0327 19:46:57.965023  785442 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0327 19:46:57.965044  785442 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:57.965080  785442 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:57.972890  785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 19:46:57.973034  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1935575466 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:57.980327  785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 19:46:57.980438  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398604646 /var/lib/minikube/kubeconfig
	I0327 19:46:57.988456  785442 exec_runner.go:51] Run: openssl version
	I0327 19:46:57.991178  785442 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 19:46:57.999642  785442 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:58.000836  785442 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Mar 27 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:58.000875  785442 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:46:58.003687  785442 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
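The `b5213941.0` name in the symlink above is OpenSSL's subject-name hash of the minikube CA: trust directories like `/etc/ssl/certs` link `<hash>.0` to the PEM so TLS libraries can locate the issuer by hash. A sketch of the same linking step with a throwaway self-signed CA (paths and subject are illustrative; assumes the `openssl` CLI is installed):

```shell
# Generate a throwaway self-signed CA, then link it into a trust
# directory under its subject-name hash, as minikube does above.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" -days 1 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
rm -rf "$certdir"
```

The hash is eight lowercase hex digits; the `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value.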
	I0327 19:46:58.012131  785442 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 19:46:58.013189  785442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 19:46:58.013230  785442 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:46:58.013357  785442 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 19:46:58.028166  785442 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 19:46:58.037323  785442 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 19:46:58.045783  785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0327 19:46:58.065867  785442 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 19:46:58.074622  785442 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 19:46:58.074647  785442 kubeadm.go:156] found existing configuration files:
	
	I0327 19:46:58.074690  785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 19:46:58.083008  785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 19:46:58.083073  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 19:46:58.093599  785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 19:46:58.101612  785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 19:46:58.101673  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 19:46:58.109156  785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 19:46:58.117274  785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 19:46:58.117320  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 19:46:58.125502  785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 19:46:58.133647  785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 19:46:58.133718  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 19:46:58.143565  785442 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 19:46:58.187062  785442 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 19:46:58.187123  785442 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 19:46:58.317646  785442 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 19:46:58.317702  785442 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 19:46:58.317718  785442 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 19:46:58.317724  785442 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 19:46:58.628749  785442 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 19:46:58.631820  785442 out.go:204]   - Generating certificates and keys ...
	I0327 19:46:58.631875  785442 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 19:46:58.631892  785442 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 19:46:58.830610  785442 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 19:46:59.063309  785442 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 19:46:59.228988  785442 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 19:46:59.336539  785442 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 19:46:59.466481  785442 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 19:46:59.466513  785442 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I0327 19:46:59.737638  785442 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 19:46:59.737764  785442 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I0327 19:46:59.957514  785442 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 19:47:00.082020  785442 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 19:47:00.503282  785442 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 19:47:00.503426  785442 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 19:47:00.634626  785442 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 19:47:01.062092  785442 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 19:47:01.146869  785442 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 19:47:01.258176  785442 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 19:47:01.349045  785442 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 19:47:01.349522  785442 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 19:47:01.352618  785442 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 19:47:01.355243  785442 out.go:204]   - Booting up control plane ...
	I0327 19:47:01.355276  785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 19:47:01.355301  785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 19:47:01.355315  785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 19:47:01.371856  785442 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 19:47:01.372696  785442 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 19:47:01.372720  785442 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 19:47:01.585355  785442 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 19:47:07.087662  785442 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502273 seconds
	I0327 19:47:07.102275  785442 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 19:47:07.114131  785442 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 19:47:07.635898  785442 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 19:47:07.635927  785442 kubeadm.go:309] [mark-control-plane] Marking the node ubuntu-20-agent-15 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 19:47:08.145338  785442 kubeadm.go:309] [bootstrap-token] Using token: bv13wn.i50u7bhta9ujrc85
	I0327 19:47:08.147207  785442 out.go:204]   - Configuring RBAC rules ...
	I0327 19:47:08.147250  785442 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 19:47:08.152619  785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 19:47:08.159958  785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 19:47:08.162979  785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0327 19:47:08.166099  785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 19:47:08.169479  785442 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 19:47:08.180439  785442 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 19:47:08.553958  785442 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 19:47:08.580529  785442 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 19:47:08.581520  785442 kubeadm.go:309] 
	I0327 19:47:08.581538  785442 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 19:47:08.581542  785442 kubeadm.go:309] 
	I0327 19:47:08.581546  785442 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 19:47:08.581550  785442 kubeadm.go:309] 
	I0327 19:47:08.581553  785442 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 19:47:08.581557  785442 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 19:47:08.581580  785442 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 19:47:08.581584  785442 kubeadm.go:309] 
	I0327 19:47:08.581588  785442 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 19:47:08.581592  785442 kubeadm.go:309] 
	I0327 19:47:08.581597  785442 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 19:47:08.581600  785442 kubeadm.go:309] 
	I0327 19:47:08.581605  785442 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 19:47:08.581609  785442 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 19:47:08.581613  785442 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 19:47:08.581616  785442 kubeadm.go:309] 
	I0327 19:47:08.581618  785442 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 19:47:08.581621  785442 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 19:47:08.581624  785442 kubeadm.go:309] 
	I0327 19:47:08.581627  785442 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bv13wn.i50u7bhta9ujrc85 \
	I0327 19:47:08.581630  785442 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:476a7cd2dcbebb6a8f56145e16668b3c5b6b5cfe98b74adc4ab35b9910ca8ec9 \
	I0327 19:47:08.581632  785442 kubeadm.go:309] 	--control-plane 
	I0327 19:47:08.581635  785442 kubeadm.go:309] 
	I0327 19:47:08.581638  785442 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 19:47:08.581640  785442 kubeadm.go:309] 
	I0327 19:47:08.581643  785442 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bv13wn.i50u7bhta9ujrc85 \
	I0327 19:47:08.581652  785442 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:476a7cd2dcbebb6a8f56145e16668b3c5b6b5cfe98b74adc4ab35b9910ca8ec9 
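The `--discovery-token-ca-cert-hash` in the join commands above is `sha256:` followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of just the hashing step (the input bytes below are placeholders; a real cluster would extract the SPKI from the CA cert, e.g. with the `cryptography` package):

```python
import hashlib

def ca_cert_hash(spki_der: bytes) -> str:
    """Format kubeadm's discovery hash: 'sha256:' plus the hex SHA-256
    of the CA certificate's DER-encoded Subject Public Key Info."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Illustrative bytes only, not a real SPKI.
print(ca_cert_hash(b"\x30\x82\x01\x22placeholder"))
```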
	I0327 19:47:08.585272  785442 cni.go:84] Creating CNI manager for ""
	I0327 19:47:08.585299  785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 19:47:08.587572  785442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 19:47:08.589088  785442 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0327 19:47:08.599776  785442 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 19:47:08.599938  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2864429203 /etc/cni/net.d/1-k8s.conflist
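The 457-byte file copied to `/etc/cni/net.d/1-k8s.conflist` above is a bridge CNI plugin chain. An illustrative minimal conflist in the same shape (minikube's actual file differs in detail; the subnet and bridge name here are assumptions):

```python
import json

# Sketch of a bridge CNI conflist like the one minikube installs.
conflist = {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
        {
            "type": "bridge",          # layer-2 bridge for pod traffic
            "bridge": "bridge",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
print(json.dumps(conflist, indent=2))
```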
	I0327 19:47:08.610682  785442 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 19:47:08.610777  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:08.610789  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-15 minikube.k8s.io/updated_at=2024_03_27T19_47_08_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0327 19:47:08.620564  785442 ops.go:34] apiserver oom_adj: -16
	I0327 19:47:08.710270  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:09.210914  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:09.710359  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:10.210424  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:10.710890  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:11.211246  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:11.711364  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:12.211117  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:12.711204  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:13.210416  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:13.710747  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:14.211045  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:14.710669  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:15.211156  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:15.710411  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:16.210652  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:16.711035  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:17.211395  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:17.710626  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:18.210496  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:18.710873  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:19.211105  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:19.711191  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:20.210751  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:20.711118  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:21.211041  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:21.711025  785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 19:47:21.788484  785442 kubeadm.go:1107] duration metric: took 13.177793351s to wait for elevateKubeSystemPrivileges
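The run of `kubectl get sa default` commands above is a fixed-interval poll (roughly every 500ms) until the default service account exists, which is what the 13.18s elevateKubeSystemPrivileges metric measures. A generic sketch of that wait loop (function names are illustrative, not minikube's):

```python
import time

def wait_for(check, interval=0.5, timeout=360.0):
    """Poll `check` every `interval` seconds until it returns True or
    `timeout` elapses, like the repeated 'get sa default' runs above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulate a service account that appears on the fourth poll.
attempts = {"n": 0}
def sa_exists():
    attempts["n"] += 1
    return attempts["n"] >= 4

print(wait_for(sa_exists, interval=0.01))  # True after 4 attempts
```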
	W0327 19:47:21.788531  785442 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 19:47:21.788574  785442 kubeadm.go:393] duration metric: took 23.775311524s to StartCluster
	I0327 19:47:21.788601  785442 settings.go:142] acquiring lock: {Name:mk6aaa0aa244fc49fbd9078e2807c923dc87e9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:47:21.788676  785442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:47:21.789457  785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/kubeconfig: {Name:mkcbe4a4107c2ed93be9cf8bf198b7dda208e9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:47:21.789694  785442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 19:47:21.789780  785442 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 19:47:21.789920  785442 addons.go:69] Setting yakd=true in profile "minikube"
	I0327 19:47:21.789931  785442 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0327 19:47:21.789947  785442 addons.go:69] Setting registry=true in profile "minikube"
	I0327 19:47:21.789968  785442 addons.go:234] Setting addon yakd=true in "minikube"
	I0327 19:47:21.789980  785442 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0327 19:47:21.790000  785442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 19:47:21.790019  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790033  785442 addons.go:234] Setting addon registry=true in "minikube"
	I0327 19:47:21.790046  785442 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0327 19:47:21.790073  785442 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0327 19:47:21.790087  785442 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0327 19:47:21.790064  785442 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0327 19:47:21.790112  785442 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0327 19:47:21.790113  785442 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0327 19:47:21.790119  785442 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0327 19:47:21.790139  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790140  785442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0327 19:47:21.790150  785442 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0327 19:47:21.790159  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790160  785442 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0327 19:47:21.790176  785442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0327 19:47:21.790205  785442 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0327 19:47:21.790274  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790687  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.790711  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.790729  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.790739  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.790764  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.790772  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.790806  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.790102  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790826  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.790839  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.790850  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.790874  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.790876  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.790937  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.790953  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.790990  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.791115  785442 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0327 19:47:21.790076  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.791190  785442 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0327 19:47:21.791223  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.790142  785442 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0327 19:47:21.791942  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.791965  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.791995  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.792011  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.793824  785442 out.go:177] * Configuring local host environment ...
	I0327 19:47:21.790091  785442 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0327 19:47:21.790812  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.789968  785442 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0327 19:47:21.792337  785442 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0327 19:47:21.792754  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	W0327 19:47:21.795506  785442 out.go:239] * 
	W0327 19:47:21.795525  785442 out.go:239] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0327 19:47:21.795538  785442 out.go:239] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0327 19:47:21.795546  785442 out.go:239] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0327 19:47:21.795553  785442 out.go:239] * 
	I0327 19:47:21.793967  785442 addons.go:234] Setting addon metrics-server=true in "minikube"
	W0327 19:47:21.795598  785442 out.go:239] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0327 19:47:21.795612  785442 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0327 19:47:21.795622  785442 out.go:239] * 
	I0327 19:47:21.795625  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.793981  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.822769  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.822827  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.822997  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.794016  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.823444  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.823763  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.823798  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.823837  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.824103  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.824132  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.824165  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.824383  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.824966  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.824994  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.825029  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.825789  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.825826  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.825897  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.825951  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0327 19:47:21.826399  785442 out.go:239]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0327 19:47:21.826416  785442 out.go:239]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0327 19:47:21.826428  785442 out.go:239] * 
	W0327 19:47:21.826439  785442 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0327 19:47:21.826541  785442 start.go:234] Will wait 6m0s for node &{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 19:47:21.794039  785442 mustload.go:65] Loading cluster: minikube
	I0327 19:47:21.794047  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.829173  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.829251  785442 out.go:177] * Verifying Kubernetes components...
	I0327 19:47:21.829722  785442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 19:47:21.831156  785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0327 19:47:21.832694  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.832720  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.832768  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.847110  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.847881  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.847956  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.849401  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.850284  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.850347  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.852392  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.856934  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.860833  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.865501  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.865539  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.865574  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.865750  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.865501  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.865824  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.870089  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.870161  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.870703  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.870754  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.873622  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.873690  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.877006  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.877185  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.877474  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.877529  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.878601  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.878629  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.881117  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.889502  785442 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 19:47:21.883611  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.884859  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.887347  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.888039  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.888214  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.890326  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.891724  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.891774  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.892054  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.892340  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.892491  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.893001  785442 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 19:47:21.893234  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.893254  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.893034  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 19:47:21.893479  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2459498510 /etc/kubernetes/addons/ig-namespace.yaml
	I0327 19:47:21.893597  785442 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0327 19:47:21.893637  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.894586  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.894605  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.894635  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.895974  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.899276  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.899327  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.901501  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.903798  785442 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 19:47:21.905345  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.905372  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.905482  785442 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 19:47:21.905514  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 19:47:21.905648  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube657095272 /etc/kubernetes/addons/deployment.yaml
	I0327 19:47:21.905739  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.907572  785442 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 19:47:21.909008  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.909244  785442 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 19:47:21.909349  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 19:47:21.909519  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube572783290 /etc/kubernetes/addons/yakd-ns.yaml
	I0327 19:47:21.911054  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.911119  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.915475  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.926809  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 19:47:21.919069  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.919105  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.919837  785442 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0327 19:47:21.921142  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.925131  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.929461  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.929788  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 19:47:21.929821  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 19:47:21.930026  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.930545  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.930565  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.935868  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 19:47:21.931109  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2057918794 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 19:47:21.931279  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:21.934625  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.935375  785442 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 19:47:21.937316  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 19:47:21.937429  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube22919953 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 19:47:21.938891  785442 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 19:47:21.938919  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 19:47:21.939019  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube816787765 /etc/kubernetes/addons/yakd-sa.yaml
	I0327 19:47:21.942146  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 19:47:21.942419  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.942500  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.940316  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:21.942906  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.943746  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 19:47:21.943771  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.944927  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.947900  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.947997  785442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 19:47:21.948126  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:21.948199  785442 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 19:47:21.952453  785442 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 19:47:21.952504  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 19:47:21.952657  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2469551737 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 19:47:21.951435  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 19:47:21.951957  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:21.954665  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.955928  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 19:47:21.956405  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:21.956021  785442 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0327 19:47:21.956410  785442 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 19:47:21.960187  785442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 19:47:21.958482  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 19:47:21.959438  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.960662  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.960758  785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 19:47:21.961686  785442 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 19:47:21.961797  785442 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 19:47:21.962126  785442 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:47:21.963737  785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0327 19:47:21.963979  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 19:47:21.963997  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 19:47:21.964008  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 19:47:21.964077  785442 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 19:47:21.967285  785442 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 19:47:21.966048  785442 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0327 19:47:21.966074  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0327 19:47:21.966200  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1979897496 /etc/kubernetes/addons/ig-role.yaml
	I0327 19:47:21.966245  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 19:47:21.966823  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube176135027 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 19:47:21.966870  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3412833599 /etc/kubernetes/addons/yakd-crb.yaml
	I0327 19:47:21.969752  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2231007622 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0327 19:47:21.969908  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 19:47:21.969975  785442 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:47:21.970116  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2381010362 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 19:47:21.973376  785442 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 19:47:21.971762  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 19:47:21.971817  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:47:21.972242  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:21.972498  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 19:47:21.974861  785442 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 19:47:21.974893  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 19:47:21.975029  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4023595453 /etc/kubernetes/addons/registry-rc.yaml
	I0327 19:47:21.975352  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:21.977427  785442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 19:47:21.979045  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 19:47:21.979081  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 19:47:21.979221  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube690712749 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 19:47:21.984153  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:21.984188  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:21.988661  785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 19:47:21.988699  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 19:47:21.988852  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube622246989 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 19:47:21.989429  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:21.990408  785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0327 19:47:21.990444  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0327 19:47:21.990755  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3615865372 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0327 19:47:21.994299  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 19:47:21.994339  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 19:47:21.994745  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2740868542 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 19:47:22.002100  785442 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 19:47:22.002127  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 19:47:22.002246  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2384368977 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 19:47:22.011696  785442 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 19:47:22.011754  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 19:47:22.012152  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube955068222 /etc/kubernetes/addons/registry-svc.yaml
	I0327 19:47:22.012633  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:22.012692  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:22.021101  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 19:47:22.021291  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3341825910 /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:47:22.025403  785442 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 19:47:22.025449  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 19:47:22.025581  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2979057734 /etc/kubernetes/addons/yakd-svc.yaml
	I0327 19:47:22.027691  785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 19:47:22.027730  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0327 19:47:22.027883  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3370427214 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 19:47:22.035598  785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 19:47:22.035742  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 19:47:22.035945  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2367739608 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 19:47:22.036852  785442 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 19:47:22.036879  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 19:47:22.036985  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4106039266 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 19:47:22.037188  785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 19:47:22.037232  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 19:47:22.037370  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube216792825 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 19:47:22.040511  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:22.040578  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:22.045417  785442 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 19:47:22.045450  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 19:47:22.045651  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3087722302 /etc/kubernetes/addons/registry-proxy.yaml
	I0327 19:47:22.053702  785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 19:47:22.053738  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 19:47:22.055454  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube328119600 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 19:47:22.055959  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:22.055986  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:22.060443  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:22.068523  785442 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 19:47:22.063780  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 19:47:22.079272  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 19:47:22.080126  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 19:47:22.083864  785442 out.go:177]   - Using image docker.io/busybox:stable
	I0327 19:47:22.080926  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 19:47:22.081000  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1088940883 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 19:47:22.081006  785442 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 19:47:22.084156  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:22.084257  785442 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 19:47:22.086049  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 19:47:22.086146  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 19:47:22.086286  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:22.086347  785442 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 19:47:22.086368  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 19:47:22.086582  785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 19:47:22.086619  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 19:47:22.087095  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2817974947 /etc/kubernetes/addons/yakd-dp.yaml
	I0327 19:47:22.087343  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2638489851 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 19:47:22.087553  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3466187947 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 19:47:22.087737  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1871747012 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 19:47:22.088229  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube72790727 /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 19:47:22.093931  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 19:47:22.095237  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 19:47:22.097844  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:22.097953  785442 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 19:47:22.097972  785442 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0327 19:47:22.097979  785442 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0327 19:47:22.098016  785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0327 19:47:22.127832  785442 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:47:22.127882  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 19:47:22.128012  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1571608221 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:47:22.149668  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 19:47:22.154758  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 19:47:22.154800  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 19:47:22.154970  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3882036622 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 19:47:22.165235  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 19:47:22.168283  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 19:47:22.169796  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 19:47:22.172197  785442 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 19:47:22.172233  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 19:47:22.172357  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube177075574 /etc/kubernetes/addons/ig-crd.yaml
	I0327 19:47:22.180919  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 19:47:22.181135  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2154554086 /etc/kubernetes/addons/storageclass.yaml
	I0327 19:47:22.200315  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:47:22.221714  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 19:47:22.224620  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 19:47:22.224651  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 19:47:22.224769  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1972881721 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 19:47:22.243948  785442 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 19:47:22.244000  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 19:47:22.244150  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3090195628 /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 19:47:22.305720  785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 19:47:22.305767  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 19:47:22.305948  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3317386515 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 19:47:22.317674  785442 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0327 19:47:22.327960  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 19:47:22.328006  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 19:47:22.328176  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1388280319 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 19:47:22.347049  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 19:47:22.355068  785442 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-15" to be "Ready" ...
	I0327 19:47:22.359014  785442 node_ready.go:49] node "ubuntu-20-agent-15" has status "Ready":"True"
	I0327 19:47:22.359044  785442 node_ready.go:38] duration metric: took 3.937317ms for node "ubuntu-20-agent-15" to be "Ready" ...
	I0327 19:47:22.359056  785442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:47:22.379141  785442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9hd8k" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:22.385381  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 19:47:22.385419  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 19:47:22.385542  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2768765113 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 19:47:22.437617  785442 start.go:948] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0327 19:47:22.496389  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 19:47:22.498207  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 19:47:22.499274  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1577108307 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 19:47:22.604378  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 19:47:22.604447  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 19:47:22.604611  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2203491235 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 19:47:22.648858  785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 19:47:22.649040  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 19:47:22.649229  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3492237834 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 19:47:22.867149  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 19:47:22.982202  785442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0327 19:47:22.988520  785442 addons.go:470] Verifying addon registry=true in "minikube"
	I0327 19:47:22.991474  785442 out.go:177] * Verifying registry addon...
	I0327 19:47:22.994085  785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 19:47:23.004990  785442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 19:47:23.005019  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:23.296803  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147081653s)
	I0327 19:47:23.296843  785442 addons.go:470] Verifying addon metrics-server=true in "minikube"
	I0327 19:47:23.349992  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256006951s)
	I0327 19:47:23.401666  785442 pod_ready.go:92] pod "coredns-76f75df574-9hd8k" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.401697  785442 pod_ready.go:81] duration metric: took 1.022523886s for pod "coredns-76f75df574-9hd8k" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.401712  785442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z26gp" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.410878  785442 pod_ready.go:92] pod "coredns-76f75df574-z26gp" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.410906  785442 pod_ready.go:81] duration metric: took 9.184406ms for pod "coredns-76f75df574-z26gp" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.410920  785442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.414129  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.248841198s)
	I0327 19:47:23.416861  785442 pod_ready.go:92] pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.416891  785442 pod_ready.go:81] duration metric: took 5.960565ms for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.416905  785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.427642  785442 pod_ready.go:92] pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.427677  785442 pod_ready.go:81] duration metric: took 10.762359ms for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.427694  785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.510720  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:23.547165  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.377314384s)
	I0327 19:47:23.550866  785442 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0327 19:47:23.574423  785442 pod_ready.go:92] pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.574446  785442 pod_ready.go:81] duration metric: took 146.743815ms for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.574457  785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj2pl" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.575392  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.407054783s)
	I0327 19:47:23.776180  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.42906421s)
	I0327 19:47:23.905675  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.705287333s)
	W0327 19:47:23.905725  785442 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 19:47:23.905756  785442 retry.go:31] will retry after 254.531798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 19:47:23.960987  785442 pod_ready.go:92] pod "kube-proxy-zj2pl" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:23.961023  785442 pod_ready.go:81] duration metric: took 386.55804ms for pod "kube-proxy-zj2pl" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.961037  785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:23.999104  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:24.161396  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 19:47:24.358940  785442 pod_ready.go:92] pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:24.358974  785442 pod_ready.go:81] duration metric: took 397.92767ms for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:24.358989  785442 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:24.499250  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:24.991920  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.124718235s)
	I0327 19:47:24.991954  785442 addons.go:470] Verifying addon csi-hostpath-driver=true in "minikube"
	I0327 19:47:24.993720  785442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 19:47:24.996862  785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 19:47:25.000362  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:25.002176  785442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 19:47:25.002199  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:25.498533  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:25.501801  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:25.998945  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:26.002911  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:26.366380  785442 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"False"
	I0327 19:47:26.499816  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:26.503146  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:26.941880  785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.780402496s)
	I0327 19:47:26.999994  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:27.002112  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:27.500049  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:27.502072  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:27.998741  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:28.002320  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:28.498926  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:28.502011  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:28.883652  785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 19:47:28.883825  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2740077225 /var/lib/minikube/google_application_credentials.json
	I0327 19:47:28.886031  785442 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"False"
	I0327 19:47:28.930154  785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 19:47:28.930315  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292447473 /var/lib/minikube/google_cloud_project
	I0327 19:47:28.942321  785442 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0327 19:47:28.942387  785442 host.go:66] Checking if "minikube" exists ...
	I0327 19:47:28.942924  785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0327 19:47:28.942945  785442 api_server.go:166] Checking apiserver status ...
	I0327 19:47:28.942981  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:28.964018  785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
	I0327 19:47:28.979942  785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
	I0327 19:47:28.980036  785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
	I0327 19:47:28.991793  785442 api_server.go:204] freezer state: "THAWED"
	I0327 19:47:28.991833  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:28.996386  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:28.996471  785442 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 19:47:29.002091  785442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 19:47:28.999564  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:29.001560  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:29.003707  785442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 19:47:29.005378  785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 19:47:29.005427  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 19:47:29.005619  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3399691076 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 19:47:29.016589  785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 19:47:29.016623  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 19:47:29.016723  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube459639426 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 19:47:29.026691  785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:47:29.026723  785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 19:47:29.026825  785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube74684384 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:47:29.035885  785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:47:29.454802  785442 addons.go:470] Verifying addon gcp-auth=true in "minikube"
	I0327 19:47:29.457748  785442 out.go:177] * Verifying gcp-auth addon...
	I0327 19:47:29.461272  785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 19:47:29.464709  785442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 19:47:29.464732  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:29.499972  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:29.503226  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:29.865919  785442 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"True"
	I0327 19:47:29.865947  785442 pod_ready.go:81] duration metric: took 5.506949661s for pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace to be "Ready" ...
	I0327 19:47:29.865960  785442 pod_ready.go:38] duration metric: took 7.506891255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:47:29.865984  785442 api_server.go:52] waiting for apiserver process to appear ...
	I0327 19:47:29.866070  785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:47:29.884958  785442 api_server.go:72] duration metric: took 8.058273533s to wait for apiserver process to appear ...
	I0327 19:47:29.884989  785442 api_server.go:88] waiting for apiserver healthz status ...
	I0327 19:47:29.885014  785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0327 19:47:29.889485  785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0327 19:47:29.890860  785442 api_server.go:141] control plane version: v1.29.3
	I0327 19:47:29.890889  785442 api_server.go:131] duration metric: took 5.891243ms to wait for apiserver health ...
	I0327 19:47:29.890901  785442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 19:47:29.900804  785442 system_pods.go:59] 18 kube-system pods found
	I0327 19:47:29.900842  785442 system_pods.go:61] "coredns-76f75df574-9hd8k" [a4783215-45d9-4bd8-8362-a4a8c6c24223] Running
	I0327 19:47:29.900849  785442 system_pods.go:61] "coredns-76f75df574-z26gp" [60b43498-08a2-4e5e-a8f9-7828b65d047f] Running
	I0327 19:47:29.900856  785442 system_pods.go:61] "csi-hostpath-attacher-0" [df2fab58-2a5b-4139-b167-ce8300067ee0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 19:47:29.900865  785442 system_pods.go:61] "csi-hostpath-resizer-0" [7f0e4f91-6759-411a-b014-114732a72381] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 19:47:29.900875  785442 system_pods.go:61] "csi-hostpathplugin-gwdj5" [29cdfc20-973f-4a21-bc62-db14b8c63eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 19:47:29.900881  785442 system_pods.go:61] "etcd-ubuntu-20-agent-15" [34f7260d-c13b-43f9-a357-e40ba7a0b538] Running
	I0327 19:47:29.900891  785442 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-15" [83ecd64c-552f-47c9-994d-0d6e0fd4aff8] Running
	I0327 19:47:29.900897  785442 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-15" [d14c2b5d-fe8c-4bb0-8ee6-090e940b87f5] Running
	I0327 19:47:29.900905  785442 system_pods.go:61] "kube-proxy-zj2pl" [7f4fd90b-fe59-4d82-bc93-6bf1e1f61698] Running
	I0327 19:47:29.900911  785442 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-15" [c6819fb1-7b50-454c-a8fc-911139e455a1] Running
	I0327 19:47:29.900923  785442 system_pods.go:61] "metrics-server-69cf46c98-99lnl" [6d4266fb-20c3-437e-b8c3-33bc953b1539] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 19:47:29.900929  785442 system_pods.go:61] "nvidia-device-plugin-daemonset-dvfbr" [3a81a4f4-da07-4e16-bad5-9c7c5139b5ab] Running
	I0327 19:47:29.900934  785442 system_pods.go:61] "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 19:47:29.900968  785442 system_pods.go:61] "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 19:47:29.900985  785442 system_pods.go:61] "snapshot-controller-58dbcc7b99-d8hzj" [eab096b6-514d-48e3-aed2-f1dfecf4ff99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:47:29.901002  785442 system_pods.go:61] "snapshot-controller-58dbcc7b99-njnrq" [7bad25f0-ddd1-4b97-8155-381a3c964b66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:47:29.901012  785442 system_pods.go:61] "storage-provisioner" [20e18899-eefb-4036-ac1e-6522ce4203cf] Running
	I0327 19:47:29.901020  785442 system_pods.go:61] "tiller-deploy-7b677967b9-7gsf8" [c4e50e3b-2e4a-4dee-aa77-fcb4e8acd261] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0327 19:47:29.901031  785442 system_pods.go:74] duration metric: took 10.124352ms to wait for pod list to return data ...
	I0327 19:47:29.901045  785442 default_sa.go:34] waiting for default service account to be created ...
	I0327 19:47:29.903710  785442 default_sa.go:45] found service account: "default"
	I0327 19:47:29.903739  785442 default_sa.go:55] duration metric: took 2.68335ms for default service account to be created ...
	I0327 19:47:29.903750  785442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 19:47:29.914072  785442 system_pods.go:86] 18 kube-system pods found
	I0327 19:47:29.914114  785442 system_pods.go:89] "coredns-76f75df574-9hd8k" [a4783215-45d9-4bd8-8362-a4a8c6c24223] Running
	I0327 19:47:29.914123  785442 system_pods.go:89] "coredns-76f75df574-z26gp" [60b43498-08a2-4e5e-a8f9-7828b65d047f] Running
	I0327 19:47:29.914135  785442 system_pods.go:89] "csi-hostpath-attacher-0" [df2fab58-2a5b-4139-b167-ce8300067ee0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 19:47:29.914145  785442 system_pods.go:89] "csi-hostpath-resizer-0" [7f0e4f91-6759-411a-b014-114732a72381] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 19:47:29.914170  785442 system_pods.go:89] "csi-hostpathplugin-gwdj5" [29cdfc20-973f-4a21-bc62-db14b8c63eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 19:47:29.914178  785442 system_pods.go:89] "etcd-ubuntu-20-agent-15" [34f7260d-c13b-43f9-a357-e40ba7a0b538] Running
	I0327 19:47:29.914186  785442 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-15" [83ecd64c-552f-47c9-994d-0d6e0fd4aff8] Running
	I0327 19:47:29.914194  785442 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-15" [d14c2b5d-fe8c-4bb0-8ee6-090e940b87f5] Running
	I0327 19:47:29.914200  785442 system_pods.go:89] "kube-proxy-zj2pl" [7f4fd90b-fe59-4d82-bc93-6bf1e1f61698] Running
	I0327 19:47:29.914206  785442 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-15" [c6819fb1-7b50-454c-a8fc-911139e455a1] Running
	I0327 19:47:29.914216  785442 system_pods.go:89] "metrics-server-69cf46c98-99lnl" [6d4266fb-20c3-437e-b8c3-33bc953b1539] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0327 19:47:29.914233  785442 system_pods.go:89] "nvidia-device-plugin-daemonset-dvfbr" [3a81a4f4-da07-4e16-bad5-9c7c5139b5ab] Running
	I0327 19:47:29.914242  785442 system_pods.go:89] "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 19:47:29.914251  785442 system_pods.go:89] "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 19:47:29.914346  785442 system_pods.go:89] "snapshot-controller-58dbcc7b99-d8hzj" [eab096b6-514d-48e3-aed2-f1dfecf4ff99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:47:29.914405  785442 system_pods.go:89] "snapshot-controller-58dbcc7b99-njnrq" [7bad25f0-ddd1-4b97-8155-381a3c964b66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:47:29.914452  785442 system_pods.go:89] "storage-provisioner" [20e18899-eefb-4036-ac1e-6522ce4203cf] Running
	I0327 19:47:29.914473  785442 system_pods.go:89] "tiller-deploy-7b677967b9-7gsf8" [c4e50e3b-2e4a-4dee-aa77-fcb4e8acd261] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0327 19:47:29.914490  785442 system_pods.go:126] duration metric: took 10.731802ms to wait for k8s-apps to be running ...
	I0327 19:47:29.914512  785442 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 19:47:29.914567  785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:47:29.942282  785442 system_svc.go:56] duration metric: took 27.755687ms WaitForService to wait for kubelet
	I0327 19:47:29.942325  785442 kubeadm.go:576] duration metric: took 8.115647513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:47:29.942354  785442 node_conditions.go:102] verifying NodePressure condition ...
	I0327 19:47:29.959218  785442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0327 19:47:29.959304  785442 node_conditions.go:123] node cpu capacity is 8
	I0327 19:47:29.959327  785442 node_conditions.go:105] duration metric: took 16.965531ms to run NodePressure ...
	I0327 19:47:29.959339  785442 start.go:240] waiting for startup goroutines ...
	I0327 19:47:29.965407  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:30.000248  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:30.003457  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:30.465976  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:30.499712  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:30.503073  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:30.965781  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:30.999553  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:31.003252  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:31.464342  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:31.500027  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:31.502057  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:31.965017  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:32.000149  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:32.003855  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:32.465127  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:32.500585  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:32.502423  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:32.984085  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:32.999821  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:33.003753  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:33.465160  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:33.501957  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:33.502669  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:33.965799  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:34.000233  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:34.002577  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:34.465952  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:34.500011  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:34.502439  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:34.965003  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:34.999680  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:35.002996  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:35.466026  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:35.499724  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:35.503091  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:35.965421  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:35.999921  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:36.001788  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:36.465795  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:36.499400  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:47:36.502597  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:36.965306  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:36.999817  785442 kapi.go:107] duration metric: took 14.005731704s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 19:47:37.002356  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:37.465544  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:37.502782  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:37.965762  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:38.002699  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:38.465250  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:38.503880  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:38.965638  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:39.002900  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:39.465960  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:39.503258  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:39.965502  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:40.003531  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:40.466095  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:40.503329  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:40.965218  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:41.003263  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:41.464507  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:41.503245  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:41.965204  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:42.002557  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:42.465781  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:42.502808  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:42.965804  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:43.003271  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:43.465528  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:43.503224  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:43.966393  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:44.002422  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:44.465657  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:44.502362  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:44.964949  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:45.002610  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:45.465932  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:45.502730  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:45.965826  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:46.042758  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:46.465380  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:46.502024  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:46.985346  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:47.002425  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:47.465161  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:47.503315  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:47.965294  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:48.001379  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:48.465964  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:48.503527  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:48.965071  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:49.003131  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:49.465603  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:49.502630  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:49.965149  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:50.002398  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:50.465413  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:50.502976  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:50.965274  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:51.006524  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:51.465729  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:51.502588  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:51.966434  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:52.003241  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:52.465100  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:52.503476  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:52.965413  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:53.002788  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:53.465387  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:53.502160  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:53.965722  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:54.002888  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:54.464765  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:54.503482  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:54.965083  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:55.002875  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:47:55.465013  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:55.504953  785442 kapi.go:107] duration metric: took 30.508088364s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 19:47:55.965406  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:56.464818  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:56.964808  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:57.465334  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:57.965012  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:58.465596  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:58.965238  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:59.465171  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:47:59.964535  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:00.464946  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:00.965388  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:01.464840  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:01.965898  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:02.465774  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:02.965113  785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:48:03.465471  785442 kapi.go:107] duration metric: took 34.004198546s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 19:48:03.467537  785442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0327 19:48:03.469094  785442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 19:48:03.470521  785442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 19:48:03.472278  785442 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, helm-tiller, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0327 19:48:03.474121  785442 addons.go:505] duration metric: took 41.684345361s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass metrics-server storage-provisioner helm-tiller yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0327 19:48:03.474176  785442 start.go:245] waiting for cluster config update ...
	I0327 19:48:03.474204  785442 start.go:254] writing updated cluster config ...
	I0327 19:48:03.474467  785442 exec_runner.go:51] Run: rm -f paused
	I0327 19:48:03.521304  785442 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 19:48:03.523261  785442 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Thu 2024-02-29 08:28:27 UTC, end at Wed 2024-03-27 19:51:28 UTC. --
	Mar 27 19:47:53 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:47:53.634640080Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" spanID=e703d01138cf4bce traceID=2d8c6955bfdb19adeac46ceeaeaddcee
	Mar 27 19:47:54 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:47:54Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Mar 27 19:48:01 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e6cdc336421fd4b34195732f1a8c8fc9cce9cf94cb1da1555840271e9f27f53/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Mar 27 19:48:01 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:01.716663193Z" level=warning msg="reference for unknown type: " digest="sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" spanID=c19e2df25da3743a traceID=3b48c0a7ad70ec54b0fe3a3bb7c26e23
	Mar 27 19:48:02 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:02Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Mar 27 19:48:07 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:07Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
	Mar 27 19:48:08 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:08.814164791Z" level=info msg="ignoring event" container=77103e5616e629a5297ec4603e8f341512ea16af8f930b073addee727cd20ea9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:48:09 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:09.241489344Z" level=error msg="Failed to compute size of container rootfs bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae: mount does not exist" spanID=ca5094ed40995b00 traceID=0860beeb79332bc55e305fb55d6ebe0d
	Mar 27 19:48:09 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:09Z" level=error msg="Error response from daemon: No such container: bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae Failed to get stats from container bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae"
	Mar 27 19:48:14 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e31ea2353b274ecb5c1df789ebe1bc17a867d981bb972889f2eb7de254a42938/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Mar 27 19:48:14 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:14Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:latest: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest"
	Mar 27 19:48:41 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:41Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
	Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.708105685Z" level=error msg="stream copy error: reading from a closed fifo"
	Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.708105787Z" level=error msg="stream copy error: reading from a closed fifo"
	Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.710301308Z" level=error msg="Error running exec 7eb62454195deede3939c2e499360a60b8bda066cd8548b4ffd8b0c52cfafa90 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" spanID=fbe1b001366d1e6a traceID=906081f0ead8899dc1832d7daaab7043
	Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.894276490Z" level=info msg="ignoring event" container=4c5f888063aba4bf2b4443b99505e49421712c2bec31f8a110e6ce0531233fad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:48:44 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:44.816535808Z" level=info msg="ignoring event" container=20cda7b4d3cec7047690a051c164b65830c7141e58051470fb4e5b586e6590ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:48:44 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:44.825968337Z" level=warning msg="failed to close stdin: task 20cda7b4d3cec7047690a051c164b65830c7141e58051470fb4e5b586e6590ef not found: not found"
	Mar 27 19:48:46 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:46.864341409Z" level=info msg="ignoring event" container=e31ea2353b274ecb5c1df789ebe1bc17a867d981bb972889f2eb7de254a42938 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:49:23 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:49:23Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
	Mar 27 19:49:24 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:49:24.820864008Z" level=info msg="ignoring event" container=49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:50:48 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:50:48Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
	Mar 27 19:50:49 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:50:49.834401828Z" level=info msg="ignoring event" container=e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:51:27 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:51:27.923245251Z" level=info msg="ignoring event" container=606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 27 19:51:28 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:51:28.050666359Z" level=info msg="ignoring event" container=943e6bd23bafc32f2a23d65ac3f717e3b702a8ca9d092f5a50bacd51b6d75545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e6b3e5d9b741c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff                            40 seconds ago      Exited              gadget                                   5                   e093f01a76c61       gadget-vpxgx
	7dd753def982b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 3 minutes ago       Running             gcp-auth                                 0                   7e6cdc336421f       gcp-auth-7d69788767-fglgd
	cf0e7f4d6f99e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	bf82fabdb6deb       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          3 minutes ago       Running             csi-provisioner                          0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	4856bc6275ab7       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            3 minutes ago       Running             liveness-probe                           0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	79dda4afae9b9       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           3 minutes ago       Running             hostpath                                 0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	1bf858930ae13       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                3 minutes ago       Running             node-driver-registrar                    0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	a5058a4a60781       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   059b87204b9a0       csi-hostpathplugin-gwdj5
	a61d2bf4ea597       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago       Running             csi-resizer                              0                   4c2a8faa4be44       csi-hostpath-resizer-0
	191a173649496       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             3 minutes ago       Running             csi-attacher                             0                   e56c2d1c84c54       csi-hostpath-attacher-0
	930ad351d2c9f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      3 minutes ago       Running             volume-snapshot-controller               0                   2c3b34cba55e5       snapshot-controller-58dbcc7b99-njnrq
	e293012664f2f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      3 minutes ago       Running             volume-snapshot-controller               0                   e916d5c7ce86c       snapshot-controller-58dbcc7b99-d8hzj
	baa4c85c6221e       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        3 minutes ago       Running             yakd                                     0                   be2c04cad9a4a       yakd-dashboard-9947fc6bf-bsvfh
	405db64cb85cb       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       3 minutes ago       Running             local-path-provisioner                   0                   eb0704d9bc5fc       local-path-provisioner-78b46b4d5c-kfxq8
	c6b840a620ed3       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago       Running             tiller                                   0                   76aaf3717d682       tiller-deploy-7b677967b9-7gsf8
	f8d17fababe41       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              3 minutes ago       Running             registry-proxy                           0                   3615e2b78473b       registry-proxy-z78qc
	ba2eefde6b1f7       registry.k8s.io/metrics-server/metrics-server@sha256:1c0419326500f1704af580d12a579671b2c3a06a8aa918cd61d0a35fb2d6b3ce                        3 minutes ago       Running             metrics-server                           0                   b21b49ee6a24f       metrics-server-69cf46c98-99lnl
	606866de2fb6b       registry@sha256:fb9c9aef62af3955f6014613456551c92e88a67dcf1fc51f5f91bcbd1832813f                                                             3 minutes ago       Unknown             registry                                 0                   943e6bd23bafc       registry-2hmfs
	9875ce84b510a       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               3 minutes ago       Running             cloud-spanner-emulator                   0                   d30f17a6fb992       cloud-spanner-emulator-5446596998-j5qwr
	a54c4d62a3b16       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   9560d5b62bed9       nvidia-device-plugin-daemonset-dvfbr
	7d6b506e168d1       6e38f40d628db                                                                                                                                4 minutes ago       Running             storage-provisioner                      0                   5b6c4d5dd4c98       storage-provisioner
	8a99b08e16c22       cbb01a7bd410d                                                                                                                                4 minutes ago       Running             coredns                                  0                   2faf66181e661       coredns-76f75df574-9hd8k
	e56210d620d68       a1d263b5dc5b0                                                                                                                                4 minutes ago       Running             kube-proxy                               0                   d3594f21d5f3a       kube-proxy-zj2pl
	5ed77f086ed62       6052a25da3f97                                                                                                                                4 minutes ago       Running             kube-controller-manager                  0                   c5f09be0b4887       kube-controller-manager-ubuntu-20-agent-15
	f7f6eba592ba1       3861cfcd7c04c                                                                                                                                4 minutes ago       Running             etcd                                     0                   6484f5fccf787       etcd-ubuntu-20-agent-15
	5d7c377589897       8c390d98f50c0                                                                                                                                4 minutes ago       Running             kube-scheduler                           0                   93922e3ed345f       kube-scheduler-ubuntu-20-agent-15
	d580fffa011f9       39f995c9f1996                                                                                                                                4 minutes ago       Running             kube-apiserver                           0                   28698b144e0fd       kube-apiserver-ubuntu-20-agent-15
	
	
	==> coredns [8a99b08e16c2] <==
	[ERROR] plugin/errors: 2 5413775293664718515.2396988378427081446. HINFO: read udp 10.244.0.4:56081->169.254.169.254:53: i/o timeout
	[ERROR] plugin/errors: 2 5413775293664718515.2396988378427081446. HINFO: read udp 10.244.0.4:45622->169.254.169.254:53: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42997 - 34889 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 6.001289416s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:38035->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:38164 - 1255 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 6.002291992s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:51981->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:34697 - 14013 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 4.001487388s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:60245->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:33383 - 51807 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000911871s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:47173->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:33516 - 63347 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000715458s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:33676->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:56508 - 28845 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000350312s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:60360->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:40670 - 25520 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.00071576s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:55322->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:58083 - 53156 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000681164s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:50925->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:36019 - 41800 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000710297s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:50693->169.254.169.254:53: i/o timeout
	[INFO] 127.0.0.1:36036 - 9426 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.001175497s
	[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:57632->169.254.169.254:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-15
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-15
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T19_47_08_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-15
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-15"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 19:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-15
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 19:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 19:48:41 +0000   Wed, 27 Mar 2024 19:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 19:48:41 +0000   Wed, 27 Mar 2024 19:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 19:48:41 +0000   Wed, 27 Mar 2024 19:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 19:48:41 +0000   Wed, 27 Mar 2024 19:47:05 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
	Addresses:
	  InternalIP:  10.128.15.240
	  Hostname:    ubuntu-20-agent-15
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                b37db8a4-1476-dab1-7f0f-0d5cfb4ed197
	  Boot ID:                    947a0fb0-1897-4d21-b854-0f0a395b1b8e
	  Kernel Version:             5.15.0-1054-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-j5qwr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  gadget                      gadget-vpxgx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  gcp-auth                    gcp-auth-7d69788767-fglgd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-76f75df574-9hd8k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m7s
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 csi-hostpathplugin-gwdj5                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-ubuntu-20-agent-15                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-ubuntu-20-agent-15             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-15    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-zj2pl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ubuntu-20-agent-15             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 metrics-server-69cf46c98-99lnl                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m5s
	  kube-system                 nvidia-device-plugin-daemonset-dvfbr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 registry-proxy-z78qc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 snapshot-controller-58dbcc7b99-d8hzj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-58dbcc7b99-njnrq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 tiller-deploy-7b677967b9-7gsf8                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  local-path-storage          local-path-provisioner-78b46b4d5c-kfxq8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-bsvfh                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m5s   kube-proxy       
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m7s   node-controller  Node ubuntu-20-agent-15 event: Registered Node ubuntu-20-agent-15 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 44 49 e3 ed 41 08 06
	[  +0.200060] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 04 78 34 4e d2 08 06
	[ +13.545955] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e d7 a1 87 4f 09 08 06
	[  +2.303509] IPv4: martian source 10.244.0.1 from 10.244.0.11, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c3 43 36 21 24 08 06
	[  +7.324932] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 da 12 b6 8d 71 08 06
	[  +0.042238] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa a9 aa e2 5d 46 08 06
	[  +3.972186] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 1b 52 bb 9e d0 08 06
	[  +0.018071] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 5a 64 50 1e 3c 08 06
	[  +1.970234] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 4c fd 79 4f 17 08 06
	[  +0.228317] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 7e 62 3c 09 0a 08 06
	[  +0.745030] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 1b c3 96 35 e3 08 06
	[Mar27 19:48] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be b1 1b 5a 71 f7 08 06
	[ +11.874368] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 1f 65 d8 05 1f 08 06
	
	
	==> etcd [f7f6eba592ba] <==
	{"level":"info","ts":"2024-03-27T19:47:04.182361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-27T19:47:04.182406Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-27T19:47:04.182417Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-27T19:47:04.182883Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"10.128.15.240:2380"}
	{"level":"info","ts":"2024-03-27T19:47:04.182914Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"10.128.15.240:2380"}
	{"level":"info","ts":"2024-03-27T19:47:04.183197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 switched to configuration voters=(1436903241728707736)"}
	{"level":"info","ts":"2024-03-27T19:47:04.183277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","added-peer-id":"13f0e7e2a3d8cc98","added-peer-peer-urls":["https://10.128.15.240:2380"]}
	{"level":"info","ts":"2024-03-27T19:47:04.367204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T19:47:04.367258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T19:47:04.36729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgPreVoteResp from 13f0e7e2a3d8cc98 at term 1"}
	{"level":"info","ts":"2024-03-27T19:47:04.367305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T19:47:04.367313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgVoteResp from 13f0e7e2a3d8cc98 at term 2"}
	{"level":"info","ts":"2024-03-27T19:47:04.367325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became leader at term 2"}
	{"level":"info","ts":"2024-03-27T19:47:04.36735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 13f0e7e2a3d8cc98 elected leader 13f0e7e2a3d8cc98 at term 2"}
	{"level":"info","ts":"2024-03-27T19:47:04.368461Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"13f0e7e2a3d8cc98","local-member-attributes":"{Name:ubuntu-20-agent-15 ClientURLs:[https://10.128.15.240:2379]}","request-path":"/0/members/13f0e7e2a3d8cc98/attributes","cluster-id":"3112ce273fbe8262","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T19:47:04.368516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T19:47:04.368665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T19:47:04.368644Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T19:47:04.36889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T19:47:04.36891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T19:47:04.369417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T19:47:04.369646Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T19:47:04.369674Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T19:47:04.37093Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.240:2379"}
	{"level":"info","ts":"2024-03-27T19:47:04.37128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [7dd753def982] <==
	2024/03/27 19:48:02 GCP Auth Webhook started!
	2024/03/27 19:48:13 Ready to marshal response ...
	2024/03/27 19:48:13 Ready to write response ...
	2024/03/27 19:48:32 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com: i/o timeout
	
	
	==> kernel <==
	 19:51:29 up  3:33,  0 users,  load average: 0.47, 1.04, 1.55
	Linux ubuntu-20-agent-15 5.15.0-1054-gcp #62~20.04.1-Ubuntu SMP Wed Mar 13 20:29:29 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [d580fffa011f] <==
	I0327 19:47:23.746503       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0327 19:47:23.792823       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:47:23.792860       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 19:47:23.831396       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:47:23.831450       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 19:47:23.859928       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:47:23.859970       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 19:47:24.112540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 19:47:24.112661       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0327 19:47:24.112673       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0327 19:47:24.112540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 19:47:24.112717       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0327 19:47:24.114770       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0327 19:47:24.902992       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.67.148"}
	I0327 19:47:24.911030       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0327 19:47:24.966463       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.124.148"}
	I0327 19:47:29.367108       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.179.137"}
	I0327 19:47:29.397815       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0327 19:47:34.530691       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 19:47:34.530770       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0327 19:47:34.531246       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.60.156:443: connect: connection refused
	E0327 19:47:34.532700       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.60.156:443: connect: connection refused
	I0327 19:47:34.571537       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [5ed77f086ed6] <==
	I0327 19:47:52.222094       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0327 19:47:52.322672       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 19:47:52.909129       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:47:52.948323       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:47:53.067973       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:47:53.075811       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:47:53.080626       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:47:53.080786       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0327 19:47:53.093060       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:47:53.911053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="7.956289ms"
	I0327 19:47:53.911192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="65.803µs"
	I0327 19:47:53.915923       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:47:53.923594       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:47:53.928138       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:47:53.928309       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0327 19:47:53.983237       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:48:01.740543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.036168ms"
	I0327 19:48:01.740638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="54.475µs"
	I0327 19:48:03.136673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="5.84073ms"
	I0327 19:48:03.136796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="66.019µs"
	I0327 19:48:23.013624       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:48:23.014077       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:48:23.040682       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 19:48:23.041924       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 19:51:27.879611       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="17.153µs"
	
	
	==> kube-proxy [e56210d620d6] <==
	I0327 19:47:22.933963       1 server_others.go:72] "Using iptables proxy"
	I0327 19:47:23.001008       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["10.128.15.240"]
	I0327 19:47:23.062624       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 19:47:23.062662       1 server_others.go:168] "Using iptables Proxier"
	I0327 19:47:23.070046       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 19:47:23.070070       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 19:47:23.070113       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 19:47:23.070342       1 server.go:865] "Version info" version="v1.29.3"
	I0327 19:47:23.070358       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 19:47:23.072911       1 config.go:188] "Starting service config controller"
	I0327 19:47:23.072928       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 19:47:23.072950       1 config.go:97] "Starting endpoint slice config controller"
	I0327 19:47:23.072954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 19:47:23.073857       1 config.go:315] "Starting node config controller"
	I0327 19:47:23.073876       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 19:47:23.173870       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 19:47:23.173940       1 shared_informer.go:318] Caches are synced for service config
	I0327 19:47:23.174283       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5d7c37758989] <==
	E0327 19:47:05.542321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 19:47:05.542326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 19:47:05.542277       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 19:47:05.542351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 19:47:05.542339       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 19:47:05.542381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 19:47:06.386532       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 19:47:06.386589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 19:47:06.402843       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 19:47:06.402881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 19:47:06.415586       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 19:47:06.415623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 19:47:06.461511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 19:47:06.461528       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 19:47:06.461553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 19:47:06.461553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 19:47:06.507026       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 19:47:06.507066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 19:47:06.526394       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 19:47:06.526448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 19:47:06.636562       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 19:47:06.636610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 19:47:06.680281       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 19:47:06.680323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0327 19:47:07.138639       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Thu 2024-02-29 08:28:27 UTC, end at Wed 2024-03-27 19:51:29 UTC. --
	Mar 27 19:50:10 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:10.746957  787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
	Mar 27 19:50:10 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:10.747428  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:50:21 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:21.746662  787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
	Mar 27 19:50:21 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:21.747110  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:50:34 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:34.746808  787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
	Mar 27 19:50:34 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:34.747259  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:50:48 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:48.747583  787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
	Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:50.285686  787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
	Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:50.286150  787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
	Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:50.286929  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:50:51 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:51.307700  787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
	Mar 27 19:50:51 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:51.308055  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:50:54 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:54.115393  787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
	Mar 27 19:50:54 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:54.116059  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:51:08 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:08.747310  787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
	Mar 27 19:51:08 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:08.747971  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:51:23 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:23.747044  787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
	Mar 27 19:51:23 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:23.747454  787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.259499  787231 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndrmj\" (UniqueName: \"kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj\") pod \"7e30047c-df90-44cb-b9a2-98b6574dd90f\" (UID: \"7e30047c-df90-44cb-b9a2-98b6574dd90f\") "
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.261467  787231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj" (OuterVolumeSpecName: "kube-api-access-ndrmj") pod "7e30047c-df90-44cb-b9a2-98b6574dd90f" (UID: "7e30047c-df90-44cb-b9a2-98b6574dd90f"). InnerVolumeSpecName "kube-api-access-ndrmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.360696  787231 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ndrmj\" (UniqueName: \"kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.759334  787231 scope.go:117] "RemoveContainer" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.782057  787231 scope.go:117] "RemoveContainer" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:28.783293  787231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
	Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.783348  787231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"} err="failed to get container status \"606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
	
	
	==> storage-provisioner [7d6b506e168d] <==
	I0327 19:47:24.192155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 19:47:24.208572       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 19:47:24.208624       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 19:47:24.218949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 19:47:24.219876       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d!
	I0327 19:47:24.220776       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a637a87-f8d7-45ab-a0c1-c98ca435982f", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d became leader
	I0327 19:47:24.321945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (205.95s)

                                                
                                    

Test pass (109/174)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.6
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 3.74
15 TestDownloadOnly/v1.29.3/binaries 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.13
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-beta.0/json-events 3.07
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.54
31 TestOffline 49.05
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 69.29
40 TestAddons/parallel/InspektorGadget 11.48
41 TestAddons/parallel/MetricsServer 5.4
42 TestAddons/parallel/HelmTiller 9.21
44 TestAddons/parallel/CSI 47.53
45 TestAddons/parallel/Headlamp 10.52
46 TestAddons/parallel/CloudSpanner 5.3
48 TestAddons/parallel/NvidiaDevicePlugin 5.25
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 10.77
55 TestCertExpiration 235.71
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 31.27
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.03
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
73 TestFunctional/serial/MinikubeKubectlCmd 0.13
74 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
75 TestFunctional/serial/ExtraConfig 35.57
76 TestFunctional/serial/ComponentHealth 0.07
77 TestFunctional/serial/LogsCmd 0.86
78 TestFunctional/serial/LogsFileCmd 0.89
79 TestFunctional/serial/InvalidService 4.07
81 TestFunctional/parallel/ConfigCmd 0.37
82 TestFunctional/parallel/DashboardCmd 9.31
83 TestFunctional/parallel/DryRun 0.2
84 TestFunctional/parallel/InternationalLanguage 0.1
85 TestFunctional/parallel/StatusCmd 0.5
88 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
89 TestFunctional/parallel/ProfileCmd/profile_list 0.26
90 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
92 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
93 TestFunctional/parallel/ServiceCmd/List 0.35
94 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
95 TestFunctional/parallel/ServiceCmd/HTTPS 0.17
96 TestFunctional/parallel/ServiceCmd/Format 0.17
97 TestFunctional/parallel/ServiceCmd/URL 0.17
98 TestFunctional/parallel/ServiceCmdConnect 8.33
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 23.59
113 TestFunctional/parallel/MySQL 21.96
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.35
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.54
122 TestFunctional/parallel/NodeLabels 0.06
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 0.42
128 TestFunctional/parallel/License 0.15
129 TestFunctional/delete_addon-resizer_images 0.03
130 TestFunctional/delete_my-image_image 0.01
131 TestFunctional/delete_minikube_cached_images 0.02
136 TestImageBuild/serial/Setup 14.44
137 TestImageBuild/serial/NormalBuild 1.05
138 TestImageBuild/serial/BuildWithBuildArg 0.72
139 TestImageBuild/serial/BuildWithDockerIgnore 0.49
140 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.51
144 TestJSONOutput/start/Command 43.19
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.51
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.42
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 10.4
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.24
172 TestMainNoArgs 0.06
173 TestMinikubeProfile 39.28
181 TestPause/serial/Start 30.12
182 TestPause/serial/SecondStartNoReconfiguration 31.4
183 TestPause/serial/Pause 0.48
184 TestPause/serial/VerifyStatus 0.15
185 TestPause/serial/Unpause 0.41
186 TestPause/serial/PauseAgain 0.55
187 TestPause/serial/DeletePaused 6.14
188 TestPause/serial/VerifyDeletedResources 0.08
202 TestRunningBinaryUpgrade 68.94
204 TestStoppedBinaryUpgrade/Setup 1.37
205 TestStoppedBinaryUpgrade/Upgrade 51
206 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
207 TestKubernetesUpgrade 319.14
TestDownloadOnly/v1.20.0/json-events (21.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (21.603701949s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.60s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (76.31ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|----------------|---------------------|----------|
	| Command |              Args              | Profile  |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |          |
	|         | -p minikube --force            |          |         |                |                     |          |
	|         | --alsologtostderr              |          |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |                |                     |          |
	|         | --container-runtime=docker     |          |         |                |                     |          |
	|         | --driver=none                  |          |         |                |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |                |                     |          |
	|---------|--------------------------------|----------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:45:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:45:34.711746  778339 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:45:34.712061  778339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:45:34.712073  778339 out.go:304] Setting ErrFile to fd 2...
	I0327 19:45:34.712077  778339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:45:34.712289  778339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	W0327 19:45:34.712414  778339 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17735-771440/.minikube/config/config.json: open /home/jenkins/minikube-integration/17735-771440/.minikube/config/config.json: no such file or directory
	I0327 19:45:34.712984  778339 out.go:298] Setting JSON to true
	I0327 19:45:34.714102  778339 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12473,"bootTime":1711556262,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:45:34.714177  778339 start.go:139] virtualization: kvm guest
	I0327 19:45:34.716917  778339 out.go:97] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:45:34.717071  778339 notify.go:220] Checking for updates...
	W0327 19:45:34.717117  778339 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:45:34.719153  778339 out.go:169] MINIKUBE_LOCATION=17735
	I0327 19:45:34.721178  778339 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:45:34.722655  778339 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:45:34.724146  778339 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:45:34.725584  778339 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 19:45:34.728423  778339 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 19:45:34.728723  778339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:45:34.741956  778339 out.go:97] Using the none driver based on user configuration
	I0327 19:45:34.741996  778339 start.go:297] selected driver: none
	I0327 19:45:34.742005  778339 start.go:901] validating driver "none" against <nil>
	I0327 19:45:34.742035  778339 start.go:1733] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0327 19:45:34.742458  778339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 19:45:34.742915  778339 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 19:45:34.743113  778339 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 19:45:34.743200  778339 cni.go:84] Creating CNI manager for ""
	I0327 19:45:34.743220  778339 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 19:45:34.743292  778339 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:45:34.745126  778339 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0327 19:45:34.745500  778339 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json ...
	I0327 19:45:34.745541  778339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json: {Name:mkc12f016488e18252a34aa57adffbeb5566b2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:45:34.745716  778339 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 19:45:34.745990  778339 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0327 19:45:34.746005  778339 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.20.0/kubelet
	I0327 19:45:34.746024  778339 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.3/json-events (3.74s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (3.743462604s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (3.74s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
--- PASS: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (80.340674ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|----------------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force            |          |         |                |                     |                     |
	|         | --alsologtostderr              |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |                |                     |                     |
	|         | --container-runtime=docker     |          |         |                |                     |                     |
	|         | --driver=none                  |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |                |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force            |          |         |                |                     |                     |
	|         | --alsologtostderr              |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |          |         |                |                     |                     |
	|         | --container-runtime=docker     |          |         |                |                     |                     |
	|         | --driver=none                  |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |                |                     |                     |
	|---------|--------------------------------|----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:45:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:45:56.656200  778520 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:45:56.656340  778520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:45:56.656357  778520 out.go:304] Setting ErrFile to fd 2...
	I0327 19:45:56.656362  778520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:45:56.656569  778520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	I0327 19:45:56.657135  778520 out.go:298] Setting JSON to true
	I0327 19:45:56.658159  778520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12495,"bootTime":1711556262,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:45:56.658239  778520 start.go:139] virtualization: kvm guest
	I0327 19:45:56.660465  778520 out.go:97] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:45:56.661984  778520 out.go:169] MINIKUBE_LOCATION=17735
	W0327 19:45:56.660590  778520 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:45:56.660682  778520 notify.go:220] Checking for updates...
	I0327 19:45:56.664791  778520 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:45:56.666237  778520 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:45:56.667632  778520 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:45:56.668909  778520 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0-beta.0/json-events (3.07s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (3.071602334s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (3.07s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (73.625021ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 | Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force                 |          |         |                |                     |                     |
	|         | --alsologtostderr                   |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |          |         |                |                     |                     |
	|         | --container-runtime=docker          |          |         |                |                     |                     |
	|         | --driver=none                       |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm              |          |         |                |                     |                     |
	| delete  | --all                               | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| delete  | -p minikube                         | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
	| start   | -o=json --download-only             | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC |                     |
	|         | -p minikube --force                 |          |         |                |                     |                     |
	|         | --alsologtostderr                   |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |          |         |                |                     |                     |
	|         | --container-runtime=docker          |          |         |                |                     |                     |
	|         | --driver=none                       |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm              |          |         |                |                     |                     |
	| delete  | --all                               | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| delete  | -p minikube                         | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
	| start   | -o=json --download-only -p          | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC |                     |
	|         | minikube --force --alsologtostderr  |          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |          |         |                |                     |                     |
	|         | --container-runtime=docker          |          |         |                |                     |                     |
	|         | --driver=none                       |          |         |                |                     |                     |
	|         | --bootstrapper=kubeadm              |          |         |                |                     |                     |
	|---------|-------------------------------------|----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:46:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:46:00.748134  778665 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:46:00.748313  778665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:00.748332  778665 out.go:304] Setting ErrFile to fd 2...
	I0327 19:46:00.748338  778665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:00.748566  778665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	I0327 19:46:00.749202  778665 out.go:298] Setting JSON to true
	I0327 19:46:00.750218  778665 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12499,"bootTime":1711556262,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:46:00.750356  778665 start.go:139] virtualization: kvm guest
	I0327 19:46:00.753127  778665 out.go:97] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 19:46:00.753284  778665 notify.go:220] Checking for updates...
	W0327 19:46:00.753326  778665 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:46:00.755071  778665 out.go:169] MINIKUBE_LOCATION=17735
	I0327 19:46:00.756691  778665 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:46:00.758316  778665 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:46:00.759655  778665 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:46:00.760996  778665 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:43581 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (49.05s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (43.29829132s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (5.751081351s)
--- PASS: TestOffline (49.05s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (64.408604ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (67.146155ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (69.29s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m9.287828402s)
--- PASS: TestAddons/Setup (69.29s)

TestAddons/parallel/InspektorGadget (11.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vpxgx" [43c5a10d-8c55-4d63-935b-1aaa886a793f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004879829s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.476499791s)
--- PASS: TestAddons/parallel/InspektorGadget (11.48s)

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.749318ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-99lnl" [6d4266fb-20c3-437e-b8c3-33bc953b1539] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00481067s
addons_test.go:415: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/HelmTiller (9.21s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.459324ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7gsf8" [c4e50e3b-2e4a-4dee-aa77-fcb4e8acd261] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004747893s
addons_test.go:473: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.889583546s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.21s)

TestAddons/parallel/CSI (47.53s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.275397ms
addons_test.go:564: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bbe14f2c-0467-431f-a7cb-2460d32e5e23] Pending
helpers_test.go:344: "task-pv-pod" [bbe14f2c-0467-431f-a7cb-2460d32e5e23] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bbe14f2c-0467-431f-a7cb-2460d32e5e23] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004010244s
addons_test.go:584: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0e797a9-1607-49e1-a2d2-345603f469c9] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0e797a9-1607-49e1-a2d2-345603f469c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0e797a9-1607-49e1-a2d2-345603f469c9] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00443224s
addons_test.go:626: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.338765395s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.53s)

TestAddons/parallel/Headlamp (10.52s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-r2ldg" [cb13d8a9-6def-4d22-b16c-50edcac59ac6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-r2ldg" [cb13d8a9-6def-4d22-b16c-50edcac59ac6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004111399s
--- PASS: TestAddons/parallel/Headlamp (10.52s)

TestAddons/parallel/CloudSpanner (5.3s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-j5qwr" [700cec43-a6ff-456f-af0b-bc2aeb8fef19] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004158054s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.30s)

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dvfbr" [3a81a4f4-da07-4e16-bad5-9c7c5139b5ab] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00473187s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-bsvfh" [d0844bb6-56ac-4bbb-bfca-0d50304b4462] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004009274s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (10.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.383544435s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.77s)

TestCertExpiration (235.71s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (15.332992807s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (33.913073442s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (6.462247518s)
--- PASS: TestCertExpiration (235.71s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17735-771440/.minikube/files/etc/test/nested/copy/778327/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (31.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (31.268642849s)
--- PASS: TestFunctional/serial/StartWithProxy (31.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.03s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (29.029614212s)
functional_test.go:659: soft start took 29.03020362s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.03s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (35.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.568231117s)
functional_test.go:757: restart took 35.568391085s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.57s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.86s)

TestFunctional/serial/LogsFileCmd (0.89s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd957140727/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.89s)

TestFunctional/serial/InvalidService (4.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (178.862123ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://10.128.15.240:30398 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (58.131055ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (57.746169ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (9.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/03/27 19:59:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 821875: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.31s)

TestFunctional/parallel/DryRun (0.2s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (99.931782ms)

-- stdout --
	* minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0327 19:59:29.727829  822238 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:59:29.727940  822238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:59:29.727945  822238 out.go:304] Setting ErrFile to fd 2...
	I0327 19:59:29.727949  822238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:59:29.728194  822238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	I0327 19:59:29.728768  822238 out.go:298] Setting JSON to false
	I0327 19:59:29.729887  822238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13308,"bootTime":1711556262,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:59:29.729977  822238 start.go:139] virtualization: kvm guest
	I0327 19:59:29.732393  822238 out.go:177] * minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	W0327 19:59:29.733986  822238 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:59:29.735567  822238 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 19:59:29.734101  822238 notify.go:220] Checking for updates...
	I0327 19:59:29.738429  822238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:59:29.739858  822238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:59:29.741136  822238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:59:29.742441  822238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 19:59:29.743779  822238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:59:29.745757  822238 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 19:59:29.746224  822238 exec_runner.go:51] Run: systemctl --version
	I0327 19:59:29.748932  822238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:59:29.761391  822238 out.go:177] * Using the none driver based on existing profile
	I0327 19:59:29.762886  822238 start.go:297] selected driver: none
	I0327 19:59:29.762906  822238 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:59:29.763046  822238 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:59:29.763076  822238 start.go:1733] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0327 19:59:29.763393  822238 out.go:239] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0327 19:59:29.765874  822238 out.go:177] 
	W0327 19:59:29.767294  822238 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 19:59:29.768628  822238 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.20s)

TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (102.34102ms)

-- stdout --
	* minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0327 19:59:29.929497  822275 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:59:29.929649  822275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:59:29.929660  822275 out.go:304] Setting ErrFile to fd 2...
	I0327 19:59:29.929664  822275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:59:29.930006  822275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
	I0327 19:59:29.930610  822275 out.go:298] Setting JSON to false
	I0327 19:59:29.931639  822275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13308,"bootTime":1711556262,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 19:59:29.931720  822275 start.go:139] virtualization: kvm guest
	I0327 19:59:29.933936  822275 out.go:177] * minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0327 19:59:29.935582  822275 out.go:177]   - MINIKUBE_LOCATION=17735
	I0327 19:59:29.937060  822275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0327 19:59:29.935482  822275 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 19:59:29.935546  822275 notify.go:220] Checking for updates...
	I0327 19:59:29.938596  822275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	I0327 19:59:29.940261  822275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	I0327 19:59:29.941786  822275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 19:59:29.943432  822275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:59:29.945515  822275 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 19:59:29.945962  822275 exec_runner.go:51] Run: systemctl --version
	I0327 19:59:29.948403  822275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:59:29.960175  822275 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0327 19:59:29.961712  822275 start.go:297] selected driver: none
	I0327 19:59:29.961735  822275 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:59:29.961938  822275 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:59:29.961978  822275 start.go:1733] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0327 19:59:29.962416  822275 out.go:239] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0327 19:59:29.965024  822275 out.go:177] 
	W0327 19:59:29.966581  822275 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 19:59:29.968144  822275 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.5s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)
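The `-f` flag exercised above takes a Go text/template that is rendered against minikube's status value. A minimal sketch of the same mechanism; the `Status` struct here is an illustrative stand-in whose fields are inferred from the template keys in the log, not minikube's actual type:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status mirrors the fields referenced by the -f template in the log
// (an assumption based on the template keys, not minikube's real struct).
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus evaluates a text/template against a Status value, which is
// the mechanism behind `minikube status -f <format>`.
func renderStatus(format string, s Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, s); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// Same template shape as the command in the log; the literal "kublet"
	// label is plain template text, not a struct field, so it renders as-is.
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := renderStatus(format, Status{"Running", "Running", "Running", "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```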

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "199.080501ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "61.261586ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "193.787123ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.02765ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nlpkf" [13dbea19-8281-4e28-a351-e7627c059cac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nlpkf" [13dbea19-8281-4e28-a351-e7627c059cac] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003840207s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1490: Took "349.663664ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://10.128.15.240:31545
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.17s)

TestFunctional/parallel/ServiceCmd/Format (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.17s)

TestFunctional/parallel/ServiceCmd/URL (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://10.128.15.240:31545
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.17s)

TestFunctional/parallel/ServiceCmdConnect (8.33s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zlfj4" [92e51798-0789-4ab1-8bbd-72afff350dbf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zlfj4" [92e51798-0789-4ab1-8bbd-72afff350dbf] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003436488s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://10.128.15.240:31997
functional_test.go:1671: http://10.128.15.240:31997: success! body:

Hostname: hello-node-connect-55497b8b78-zlfj4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.128.15.240:8080/

Request Headers:
	accept-encoding=gzip
	host=10.128.15.240:31997
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.33s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (23.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2ca0fed9-f3cf-43f1-ba20-9a40ed653c3c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004324422s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6f400c4d-2103-42a8-9e5f-a07eaa3f6fbf] Pending
helpers_test.go:344: "sp-pod" [6f400c4d-2103-42a8-9e5f-a07eaa3f6fbf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6f400c4d-2103-42a8-9e5f-a07eaa3f6fbf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003934387s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23868952-5a73-48e1-af39-b23be37d05e8] Pending
helpers_test.go:344: "sp-pod" [23868952-5a73-48e1-af39-b23be37d05e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23868952-5a73-48e1-af39-b23be37d05e8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004150866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.59s)

TestFunctional/parallel/MySQL (21.96s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5jlwk" [f24b9aa2-f8f3-4a47-85bd-3d81d2de5897] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5jlwk" [f24b9aa2-f8f3-4a47-85bd-3d81d2de5897] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003770239s
functional_test.go:1803: (dbg) Run:  kubectl --context minikube exec mysql-859648c796-5jlwk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context minikube exec mysql-859648c796-5jlwk -- mysql -ppassword -e "show databases;": exit status 1 (162.944311ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context minikube exec mysql-859648c796-5jlwk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context minikube exec mysql-859648c796-5jlwk -- mysql -ppassword -e "show databases;": exit status 1 (116.825512ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context minikube exec mysql-859648c796-5jlwk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.35s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.347396111s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.35s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.54s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.542328724s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.54s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

TestFunctional/delete_addon-resizer_images (0.03s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:minikube
--- PASS: TestFunctional/delete_addon-resizer_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.44s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.443150009s)
--- PASS: TestImageBuild/serial/Setup (14.44s)

TestImageBuild/serial/NormalBuild (1.05s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.045276501s)
--- PASS: TestImageBuild/serial/NormalBuild (1.05s)

TestImageBuild/serial/BuildWithBuildArg (0.72s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.72s)

TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

TestJSONOutput/start/Command (43.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (43.187266624s)
--- PASS: TestJSONOutput/start/Command (43.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
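IncreasingCurrentSteps asserts that the current-step values in minikube's JSON event stream never decrease across the run. A minimal version of that monotonicity check, operating on step numbers already extracted from the events (the extraction itself is omitted here):

```go
package main

import "fmt"

// checkIncreasing verifies that step values never decrease, which is the
// property TestJSONOutput/*/parallel/IncreasingCurrentSteps asserts on the
// currentstep fields of the --output=json event stream.
func checkIncreasing(steps []int) error {
	for i := 1; i < len(steps); i++ {
		if steps[i] < steps[i-1] {
			return fmt.Errorf("step %d seen after step %d", steps[i], steps[i-1])
		}
	}
	return nil
}

func main() {
	// Repeated steps are allowed; only a decrease is a failure.
	fmt.Println(checkIncreasing([]int{1, 2, 2, 3}) == nil)
	fmt.Println(checkIncreasing([]int{1, 3, 2}) == nil)
}
```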

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.400335094s)
--- PASS: TestJSONOutput/stop/Command (10.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.12432ms)

-- stdout --
	{"specversion":"1.0","id":"54235d72-56bd-45da-bf08-d7e4f559c2cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c62fa44-2aab-4814-bd23-0c0d0a3d2191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17735"}}
	{"specversion":"1.0","id":"fae503bd-7fb5-4dc1-a70e-9e9688c6e084","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7b63ee6-3d9d-485b-9c37-ed97fbb8edc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig"}}
	{"specversion":"1.0","id":"21e1ff5b-d62b-4995-85e6-adaa4b0fb85f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube"}}
	{"specversion":"1.0","id":"0f7ed0fe-737e-4830-ac6b-4b895d20b014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e51e9990-b0a3-4c5d-9111-dc36a494bc5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7dce898c-3823-4b08-bdd8-20c263fbcc33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.24s)
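
The JSON lines in the stdout above are CloudEvents 1.0 envelopes, one per line, with the payload under `data`. A minimal sketch (Python, not part of the test suite) for pulling the first error event out of such a stream, using two abridged lines from this run:

```python
import json

# Two CloudEvents lines as emitted by `minikube start --output=json`,
# abridged from the TestErrorJSONOutput run above.
events = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/amd64"}}',
]

def first_error(lines):
    """Return (name, exitcode, message) for the first *.error event, else None."""
    for line in lines:
        ev = json.loads(line)
        if ev.get("type", "").endswith(".error"):
            d = ev["data"]
            # exitcode is serialized as a string in the event payload
            return d["name"], int(d["exitcode"]), d["message"]
    return None

print(first_error(events))
```

Note that `exitcode` (here `56`, matching the process exit status the test asserts) arrives as a string and must be converted before comparing.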

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (39.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.171232501s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.638495459s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (5.751008848s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (39.28s)

TestPause/serial/Start (30.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
E0327 20:03:03.536607  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.542418  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.552747  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.573102  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.613453  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.693865  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:03.854314  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
E0327 20:03:04.174888  778327 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (30.115725229s)
--- PASS: TestPause/serial/Start (30.12s)

TestPause/serial/SecondStartNoReconfiguration (31.4s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (31.398856164s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.40s)

TestPause/serial/Pause (0.48s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.15s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (147.158524ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.15s)
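
In the `--layout=cluster` status document above, minikube reports `StatusCode` 418 for "Paused" and 405 for "Stopped", and the CLI exits with status 2 in this state, which is why the non-zero exit is expected here. A small illustrative parser (Python, abridged fields from the run above; the helper name is my own):

```python
import json

# Cluster status as printed by `minikube status --output=json --layout=cluster`
# while the cluster is paused (abridged from the run above).
doc = json.loads('''
{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",
   "Components":{
     "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
     "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
''')

def paused_components(status):
    """Names of node components reporting 418 ("Paused")."""
    return [name
            for node in status["Nodes"]
            for name, comp in node["Components"].items()
            if comp["StatusCode"] == 418]

print(doc["StatusName"], paused_components(doc))
```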

TestPause/serial/Unpause (0.41s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

TestPause/serial/PauseAgain (0.55s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

TestPause/serial/DeletePaused (6.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (6.142462392s)
--- PASS: TestPause/serial/DeletePaused (6.14s)

TestPause/serial/VerifyDeletedResources (0.08s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.08s)

TestRunningBinaryUpgrade (68.94s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2324663213 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2324663213 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (28.795968142s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (36.217404427s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.495681271s)
--- PASS: TestRunningBinaryUpgrade (68.94s)

TestStoppedBinaryUpgrade/Setup (1.37s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.37s)

TestStoppedBinaryUpgrade/Upgrade (51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.971186586 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.971186586 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.233438666s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.971186586 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.971186586 -p minikube stop: (23.740719552s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.02692642s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

TestKubernetesUpgrade (319.14s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.596985488s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.336253967s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (93.635454ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m26.660503923s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (85.676782ms)

-- stdout --
	* minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17735
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start --kubernetes-version=v1.30.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.717092582s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (5.594148281s)
--- PASS: TestKubernetesUpgrade (319.14s)
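
The `K8S_DOWNGRADE_UNSUPPORTED` rejection (exit 106) above amounts to comparing the requested Kubernetes version against the existing cluster's version and refusing when it is lower. A hypothetical sketch of that guard (Python; this is not minikube's actual code, and it deliberately ignores pre-release tags like `-beta.0`):

```python
def parse(v):
    # "v1.30.0-beta.0" -> (1, 30, 0); pre-release suffixes are dropped here,
    # so this is a simplification of real semver ordering.
    core = v.lstrip("v").split("-")[0]
    return tuple(int(p) for p in core.split("."))

def check_downgrade(existing, requested):
    """Raise if the requested version is older than the existing cluster's."""
    if parse(requested) < parse(existing):
        raise RuntimeError(
            f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade {existing} to {requested}")

check_downgrade("v1.30.0-beta.0", "v1.30.0-beta.0")  # same version: allowed
try:
    check_downgrade("v1.30.0-beta.0", "v1.20.0")
except RuntimeError as e:
    print(e)
```

This matches the observed behavior: the subsequent restart at the same `v1.30.0-beta.0` succeeds, while the `v1.20.0` request is rejected before any cluster changes are made.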


Test skip (64/174)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.29.3/preload-exists 0
14 TestDownloadOnly/v1.29.3/cached-images 0
16 TestDownloadOnly/v1.29.3/kubectl 0
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestAddons/parallel/Ingress 0
43 TestAddons/parallel/Olm 0
47 TestAddons/parallel/LocalPath 0
54 TestCertOptions 0
56 TestDockerFlags 0
57 TestForceSystemdFlag 0
58 TestForceSystemdEnv 0
59 TestDockerEnvContainerd 0
60 TestKVMDriverInstallOrUpdate 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
63 TestErrorSpam 0
72 TestFunctional/serial/CacheCmd 0
86 TestFunctional/parallel/MountCmd 0
103 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
105 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
106 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
107 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
108 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
109 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
111 TestFunctional/parallel/SSHCmd 0
112 TestFunctional/parallel/CpCmd 0
114 TestFunctional/parallel/FileSync 0
115 TestFunctional/parallel/CertSync 0
120 TestFunctional/parallel/DockerEnv 0
121 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/ImageCommands 0
124 TestFunctional/parallel/NonActiveRuntimeDisabled 0
132 TestGvisorAddon 0
133 TestMultiControlPlane 0
141 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
168 TestKicCustomNetwork 0
169 TestKicExistingNetwork 0
170 TestKicCustomSubnet 0
171 TestKicStaticIP 0
174 TestMountStart 0
175 TestMultiNode 0
176 TestNetworkPlugins 0
177 TestNoKubernetes 0
178 TestChangeNoneUser 0
189 TestPreload 0
190 TestScheduledStopWindows 0
191 TestScheduledStopUnix 0
192 TestSkaffold 0
195 TestStartStop/group/old-k8s-version 0.14
196 TestStartStop/group/newest-cni 0.15
197 TestStartStop/group/default-k8s-diff-port 0.14
198 TestStartStop/group/no-preload 0.14
199 TestStartStop/group/disable-driver-mounts 0.14
200 TestStartStop/group/embed-certs 0.14
201 TestInsufficientStorage 0
208 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:196: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:869: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1037: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1713: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1756: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)
TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1920: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1951: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:454: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:541: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:291: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2012: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)
TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)
TestStartStop/group/newest-cni (0.15s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.15s)

TestStartStop/group/default-k8s-diff-port (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

TestStartStop/group/no-preload (0.14s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
TestStartStop/group/embed-certs (0.14s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)