Test Report: KVM_Linux_containerd 18588

801f50a102c40cfdc9fc79f6fcbe1cefa0ef9ea3:2024-04-08:33935

Failed tests (1/333)

| Order | Failed test                    | Duration |
|-------|--------------------------------|----------|
| 42    | TestAddons/parallel/HelmTiller | 14.43s   |

TestAddons/parallel/HelmTiller (14.43s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.277781ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-8rp29" [339d3599-80be-4668-bd18-039f66262f77] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005120915s
addons_test.go:473: (dbg) Run:  kubectl --context addons-400631 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-400631 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.989199763s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-400631 addons disable helm-tiller --alsologtostderr -v=1: exit status 11 (378.681952ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 11:15:19.383764  364087 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:15:19.383894  364087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:15:19.383903  364087 out.go:304] Setting ErrFile to fd 2...
	I0408 11:15:19.383908  364087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:15:19.384109  364087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:15:19.384378  364087 mustload.go:65] Loading cluster: addons-400631
	I0408 11:15:19.384740  364087 config.go:182] Loaded profile config "addons-400631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:15:19.384766  364087 addons.go:597] checking whether the cluster is paused
	I0408 11:15:19.384866  364087 config.go:182] Loaded profile config "addons-400631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:15:19.384885  364087 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:15:19.385298  364087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:15:19.385357  364087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:15:19.400005  364087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32945
	I0408 11:15:19.400544  364087 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:15:19.401146  364087 main.go:141] libmachine: Using API Version  1
	I0408 11:15:19.401179  364087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:15:19.401579  364087 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:15:19.401803  364087 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:15:19.403551  364087 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:15:19.403785  364087 ssh_runner.go:195] Run: systemctl --version
	I0408 11:15:19.403809  364087 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:15:19.406242  364087 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:15:19.406631  364087 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:15:19.406680  364087 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:15:19.406807  364087 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:15:19.406998  364087 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:15:19.407158  364087 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:15:19.407320  364087 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:15:19.518529  364087 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 11:15:19.518603  364087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:15:19.603889  364087 cri.go:89] found id: "fd632210c280dab91cc1c29e6eb17d26c1c935c8c798ae1c12501d855c7e498a"
	I0408 11:15:19.603919  364087 cri.go:89] found id: "e8b4b0d7768a568ff9e58976740527200d9affbb226d401c1037783472c07552"
	I0408 11:15:19.603925  364087 cri.go:89] found id: "01c04687464e88f73b07bae5333a7937162d8dfc4aee73e862c8ec00cf4f7a54"
	I0408 11:15:19.603930  364087 cri.go:89] found id: "9a1cc9218d48a0fe82ba939e32211e41b7978da2992b80d72fd17ff397557a59"
	I0408 11:15:19.603933  364087 cri.go:89] found id: "7b978c7e4f37fc95ea50353caa5df51231f9a3ae209b66787b63eae1ae4847e9"
	I0408 11:15:19.603942  364087 cri.go:89] found id: "0abd7135e3f0da7da10842bc6eaa7fa3a998d9644b76c5333330a22a24eb0d6a"
	I0408 11:15:19.603946  364087 cri.go:89] found id: "f742cc6b7bd01541bd7d4f05763f8ad39cf1a61afda8e4e311837fdcae791958"
	I0408 11:15:19.603950  364087 cri.go:89] found id: "cceb17d102723a8ffe5a674661c88da7cc18a4d9f345c3db08c117b477905048"
	I0408 11:15:19.603954  364087 cri.go:89] found id: "f66c474b725c3af16f1aea67adfec93aa9529e0f83c994245cd4089c306a403e"
	I0408 11:15:19.603962  364087 cri.go:89] found id: "60d1fd7f32c635cf250e04de8ee5496527f70ea321c25b722aed4e441559ff66"
	I0408 11:15:19.603966  364087 cri.go:89] found id: "4a0210f42ce0f1fba5b35a4cc0adac9fb94df4b2d271ccc3834176ce316e7776"
	I0408 11:15:19.603970  364087 cri.go:89] found id: "1eb6354d9f73b52d807e863c261f8c2e0b47cd751049a9aa2092ffd988ca3d0d"
	I0408 11:15:19.603974  364087 cri.go:89] found id: "bba55d186d09a02996167f3434811e8bdc6628796c535fd45c13b08a20671c2b"
	I0408 11:15:19.603978  364087 cri.go:89] found id: "6347efe432fe220d4c5fae7a9ad6f3a10ec1ba40e9f1967d77330935719e2a88"
	I0408 11:15:19.603984  364087 cri.go:89] found id: "4bfbede92a0e0d9b0784ea7a8679888e7eee4785a01325b3e92094860f694cb9"
	I0408 11:15:19.603992  364087 cri.go:89] found id: "57ad7c7e0a07e7b251281a725e674bd9dd0e8a3852418dfcfc9d0c45ce7783c1"
	I0408 11:15:19.603996  364087 cri.go:89] found id: "bcea5aba35a74a30c275dbf166ff8ba33053116efc71cdfe5a1bb456a85801c4"
	I0408 11:15:19.604004  364087 cri.go:89] found id: "aafd1302df06928bdb2af2c7a7ab3db8a6df506b870ea34048abcb3179572435"
	I0408 11:15:19.604009  364087 cri.go:89] found id: "efcf6cf768615635a6c88fdeb6ef8e95c0a6ed3a4383d2f698f574475d498144"
	I0408 11:15:19.604013  364087 cri.go:89] found id: "54e7d6328605ddf78548c049c922b274e9a270687642154da30eb71f6cc37e64"
	I0408 11:15:19.604020  364087 cri.go:89] found id: "bc27a68545798606b3509b6cef8e7660ca38b3c30c470898e4bfaecd5b3d3e87"
	I0408 11:15:19.604024  364087 cri.go:89] found id: "bce1890f11d6deb61adca61032e95093a381e051d22709362c9e13bd2aa5223e"
	I0408 11:15:19.604031  364087 cri.go:89] found id: "b31175f1ddb4319a039c02f8e4eeae38ce230b88eb7fd99c244077d72f2520b5"
	I0408 11:15:19.604036  364087 cri.go:89] found id: "f7482b68936ee84476cae89c3eafd497340c3d903107c10d638b5ab138447332"
	I0408 11:15:19.604042  364087 cri.go:89] found id: ""
	I0408 11:15:19.604101  364087 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0408 11:15:19.695314  364087 main.go:141] libmachine: Making call to close driver server
	I0408 11:15:19.695342  364087 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:15:19.695671  364087 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:15:19.695689  364087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:15:19.697819  364087 out.go:177] 
	W0408 11:15:19.699067  364087 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T11:15:19Z" level=error msg="stat /run/containerd/runc/k8s.io/edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8: no such file or directory"
	
	W0408 11:15:19.699093  364087 out.go:239] * 
	W0408 11:15:19.702460  364087 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_6f112806b36003b4c7cc9d1475fa654343463182_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:15:19.703788  364087 out.go:177] 

** /stderr **
addons_test.go:492: failed disabling helm-tiller addon. arg "out/minikube-linux-amd64 -p addons-400631 addons disable helm-tiller --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-400631 -n addons-400631
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-400631 logs -n 25: (2.114540347s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-480610                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| delete  | -p download-only-480610                                                                     | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| start   | -o=json --download-only                                                                     | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-867752                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| delete  | -p download-only-867752                                                                     | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| start   | -o=json --download-only                                                                     | download-only-025600 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-025600                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| delete  | -p download-only-025600                                                                     | download-only-025600 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| delete  | -p download-only-480610                                                                     | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| delete  | -p download-only-867752                                                                     | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| delete  | -p download-only-025600                                                                     | download-only-025600 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-192648 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC |                     |
	|         | binary-mirror-192648                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:33245                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-192648                                                                     | binary-mirror-192648 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:12 UTC |
	| addons  | disable dashboard -p                                                                        | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC |                     |
	|         | addons-400631                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC |                     |
	|         | addons-400631                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-400631 --wait=true                                                                | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:12 UTC | 08 Apr 24 11:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-400631 ssh cat                                                                       | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:15 UTC | 08 Apr 24 11:15 UTC |
	|         | /opt/local-path-provisioner/pvc-dbb427bd-2a28-4821-82f4-4fe92ed51900_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-400631 addons disable                                                                | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:15 UTC | 08 Apr 24 11:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-400631 addons disable                                                                | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:15 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-400631 addons                                                                        | addons-400631        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:15 UTC |                     |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:12:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:12:39.698464  362928 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:12:39.698651  362928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:12:39.698659  362928 out.go:304] Setting ErrFile to fd 2...
	I0408 11:12:39.698666  362928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:12:39.699427  362928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:12:39.700092  362928 out.go:298] Setting JSON to false
	I0408 11:12:39.701015  362928 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3303,"bootTime":1712571457,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:12:39.701082  362928 start.go:139] virtualization: kvm guest
	I0408 11:12:39.702997  362928 out.go:177] * [addons-400631] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:12:39.704376  362928 notify.go:220] Checking for updates...
	I0408 11:12:39.704387  362928 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:12:39.705655  362928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:12:39.706920  362928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:12:39.708086  362928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:12:39.709154  362928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:12:39.710295  362928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:12:39.711504  362928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:12:39.742784  362928 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 11:12:39.743922  362928 start.go:297] selected driver: kvm2
	I0408 11:12:39.743934  362928 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:12:39.743943  362928 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:12:39.744615  362928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:12:39.744697  362928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-354699/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:12:39.759006  362928 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:12:39.759095  362928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:12:39.759318  362928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:12:39.759386  362928 cni.go:84] Creating CNI manager for ""
	I0408 11:12:39.759402  362928 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:12:39.759413  362928 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:12:39.759476  362928 start.go:340] cluster config:
	{Name:addons-400631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-400631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:12:39.759590  362928 iso.go:125] acquiring lock: {Name:mk9795a25e82a211f5efea96f359ae93d962e2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:12:39.761199  362928 out.go:177] * Starting "addons-400631" primary control-plane node in "addons-400631" cluster
	I0408 11:12:39.762501  362928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 11:12:39.762528  362928 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0408 11:12:39.762535  362928 cache.go:56] Caching tarball of preloaded images
	I0408 11:12:39.762613  362928 preload.go:173] Found /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 11:12:39.762624  362928 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0408 11:12:39.762949  362928 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/config.json ...
	I0408 11:12:39.762975  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/config.json: {Name:mk7d83c030cbd3d42250a208417aa057b67c696b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:12:39.763141  362928 start.go:360] acquireMachinesLock for addons-400631: {Name:mkaca08e3229d5c293cd7c1de8c4cda90b4edc07 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:12:39.763190  362928 start.go:364] duration metric: took 33.794µs to acquireMachinesLock for "addons-400631"
	I0408 11:12:39.763206  362928 start.go:93] Provisioning new machine with config: &{Name:addons-400631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-400631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 11:12:39.763268  362928 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 11:12:39.764730  362928 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 11:12:39.764860  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:12:39.764897  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:12:39.778657  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0408 11:12:39.779132  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:12:39.779642  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:12:39.779665  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:12:39.780021  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:12:39.780231  362928 main.go:141] libmachine: (addons-400631) Calling .GetMachineName
	I0408 11:12:39.780356  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:12:39.780515  362928 start.go:159] libmachine.API.Create for "addons-400631" (driver="kvm2")
	I0408 11:12:39.780538  362928 client.go:168] LocalClient.Create starting
	I0408 11:12:39.780580  362928 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem
	I0408 11:12:39.895787  362928 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/cert.pem
	I0408 11:12:40.115563  362928 main.go:141] libmachine: Running pre-create checks...
	I0408 11:12:40.115589  362928 main.go:141] libmachine: (addons-400631) Calling .PreCreateCheck
	I0408 11:12:40.116098  362928 main.go:141] libmachine: (addons-400631) Calling .GetConfigRaw
	I0408 11:12:40.116526  362928 main.go:141] libmachine: Creating machine...
	I0408 11:12:40.116542  362928 main.go:141] libmachine: (addons-400631) Calling .Create
	I0408 11:12:40.116657  362928 main.go:141] libmachine: (addons-400631) Creating KVM machine...
	I0408 11:12:40.117903  362928 main.go:141] libmachine: (addons-400631) DBG | found existing default KVM network
	I0408 11:12:40.118640  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:40.118502  362950 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0408 11:12:40.118682  362928 main.go:141] libmachine: (addons-400631) DBG | created network xml: 
	I0408 11:12:40.118704  362928 main.go:141] libmachine: (addons-400631) DBG | <network>
	I0408 11:12:40.118743  362928 main.go:141] libmachine: (addons-400631) DBG |   <name>mk-addons-400631</name>
	I0408 11:12:40.118771  362928 main.go:141] libmachine: (addons-400631) DBG |   <dns enable='no'/>
	I0408 11:12:40.118780  362928 main.go:141] libmachine: (addons-400631) DBG |   
	I0408 11:12:40.118790  362928 main.go:141] libmachine: (addons-400631) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 11:12:40.118808  362928 main.go:141] libmachine: (addons-400631) DBG |     <dhcp>
	I0408 11:12:40.118814  362928 main.go:141] libmachine: (addons-400631) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 11:12:40.118823  362928 main.go:141] libmachine: (addons-400631) DBG |     </dhcp>
	I0408 11:12:40.118828  362928 main.go:141] libmachine: (addons-400631) DBG |   </ip>
	I0408 11:12:40.118854  362928 main.go:141] libmachine: (addons-400631) DBG |   
	I0408 11:12:40.118861  362928 main.go:141] libmachine: (addons-400631) DBG | </network>
	I0408 11:12:40.118875  362928 main.go:141] libmachine: (addons-400631) DBG | 
	I0408 11:12:40.124088  362928 main.go:141] libmachine: (addons-400631) DBG | trying to create private KVM network mk-addons-400631 192.168.39.0/24...
	I0408 11:12:40.185814  362928 main.go:141] libmachine: (addons-400631) DBG | private KVM network mk-addons-400631 192.168.39.0/24 created
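The `network.go:206` line above shows minikube choosing a free private /24 before defining the `mk-addons-400631` network. A minimal Go sketch of that selection idea (the candidate list and `inUse` set here are hypothetical, not minikube's actual probing logic, which inspects existing libvirt networks and host interfaces):

```go
package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is not already in use.
// The candidates are illustrative; minikube derives its own candidate list.
func pickFreeSubnet(candidates []string, inUse map[string]bool) string {
	for _, c := range candidates {
		if !inUse[c] {
			return c
		}
	}
	return "" // no free subnet found
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	inUse := map[string]bool{} // would be populated from existing networks
	fmt.Println(pickFreeSubnet(candidates, inUse))
}
```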
	I0408 11:12:40.185845  362928 main.go:141] libmachine: (addons-400631) Setting up store path in /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631 ...
	I0408 11:12:40.185859  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:40.185737  362950 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:12:40.185870  362928 main.go:141] libmachine: (addons-400631) Building disk image from file:///home/jenkins/minikube-integration/18588-354699/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:12:40.185911  362928 main.go:141] libmachine: (addons-400631) Downloading /home/jenkins/minikube-integration/18588-354699/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-354699/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:12:40.447391  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:40.447269  362950 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa...
	I0408 11:12:40.580285  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:40.580127  362950 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/addons-400631.rawdisk...
	I0408 11:12:40.580314  362928 main.go:141] libmachine: (addons-400631) DBG | Writing magic tar header
	I0408 11:12:40.580325  362928 main.go:141] libmachine: (addons-400631) DBG | Writing SSH key tar header
	I0408 11:12:40.580333  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:40.580255  362950 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631 ...
	I0408 11:12:40.580346  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631
	I0408 11:12:40.580430  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631 (perms=drwx------)
	I0408 11:12:40.580461  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins/minikube-integration/18588-354699/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:12:40.580476  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-354699/.minikube/machines
	I0408 11:12:40.580498  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:12:40.580512  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-354699
	I0408 11:12:40.580527  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins/minikube-integration/18588-354699/.minikube (perms=drwxr-xr-x)
	I0408 11:12:40.580536  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:12:40.580545  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins/minikube-integration/18588-354699 (perms=drwxrwxr-x)
	I0408 11:12:40.580561  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:12:40.580573  362928 main.go:141] libmachine: (addons-400631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:12:40.580584  362928 main.go:141] libmachine: (addons-400631) Creating domain...
	I0408 11:12:40.580602  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:12:40.580630  362928 main.go:141] libmachine: (addons-400631) DBG | Checking permissions on dir: /home
	I0408 11:12:40.580653  362928 main.go:141] libmachine: (addons-400631) DBG | Skipping /home - not owner
	I0408 11:12:40.581685  362928 main.go:141] libmachine: (addons-400631) define libvirt domain using xml: 
	I0408 11:12:40.581717  362928 main.go:141] libmachine: (addons-400631) <domain type='kvm'>
	I0408 11:12:40.581727  362928 main.go:141] libmachine: (addons-400631)   <name>addons-400631</name>
	I0408 11:12:40.581750  362928 main.go:141] libmachine: (addons-400631)   <memory unit='MiB'>4000</memory>
	I0408 11:12:40.581764  362928 main.go:141] libmachine: (addons-400631)   <vcpu>2</vcpu>
	I0408 11:12:40.581780  362928 main.go:141] libmachine: (addons-400631)   <features>
	I0408 11:12:40.581800  362928 main.go:141] libmachine: (addons-400631)     <acpi/>
	I0408 11:12:40.581812  362928 main.go:141] libmachine: (addons-400631)     <apic/>
	I0408 11:12:40.581818  362928 main.go:141] libmachine: (addons-400631)     <pae/>
	I0408 11:12:40.581822  362928 main.go:141] libmachine: (addons-400631)     
	I0408 11:12:40.581827  362928 main.go:141] libmachine: (addons-400631)   </features>
	I0408 11:12:40.581832  362928 main.go:141] libmachine: (addons-400631)   <cpu mode='host-passthrough'>
	I0408 11:12:40.581836  362928 main.go:141] libmachine: (addons-400631)   
	I0408 11:12:40.581856  362928 main.go:141] libmachine: (addons-400631)   </cpu>
	I0408 11:12:40.581865  362928 main.go:141] libmachine: (addons-400631)   <os>
	I0408 11:12:40.581870  362928 main.go:141] libmachine: (addons-400631)     <type>hvm</type>
	I0408 11:12:40.581875  362928 main.go:141] libmachine: (addons-400631)     <boot dev='cdrom'/>
	I0408 11:12:40.581882  362928 main.go:141] libmachine: (addons-400631)     <boot dev='hd'/>
	I0408 11:12:40.581887  362928 main.go:141] libmachine: (addons-400631)     <bootmenu enable='no'/>
	I0408 11:12:40.581894  362928 main.go:141] libmachine: (addons-400631)   </os>
	I0408 11:12:40.581900  362928 main.go:141] libmachine: (addons-400631)   <devices>
	I0408 11:12:40.581908  362928 main.go:141] libmachine: (addons-400631)     <disk type='file' device='cdrom'>
	I0408 11:12:40.581917  362928 main.go:141] libmachine: (addons-400631)       <source file='/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/boot2docker.iso'/>
	I0408 11:12:40.581925  362928 main.go:141] libmachine: (addons-400631)       <target dev='hdc' bus='scsi'/>
	I0408 11:12:40.581955  362928 main.go:141] libmachine: (addons-400631)       <readonly/>
	I0408 11:12:40.581985  362928 main.go:141] libmachine: (addons-400631)     </disk>
	I0408 11:12:40.582004  362928 main.go:141] libmachine: (addons-400631)     <disk type='file' device='disk'>
	I0408 11:12:40.582021  362928 main.go:141] libmachine: (addons-400631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:12:40.582037  362928 main.go:141] libmachine: (addons-400631)       <source file='/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/addons-400631.rawdisk'/>
	I0408 11:12:40.582050  362928 main.go:141] libmachine: (addons-400631)       <target dev='hda' bus='virtio'/>
	I0408 11:12:40.582064  362928 main.go:141] libmachine: (addons-400631)     </disk>
	I0408 11:12:40.582077  362928 main.go:141] libmachine: (addons-400631)     <interface type='network'>
	I0408 11:12:40.582090  362928 main.go:141] libmachine: (addons-400631)       <source network='mk-addons-400631'/>
	I0408 11:12:40.582100  362928 main.go:141] libmachine: (addons-400631)       <model type='virtio'/>
	I0408 11:12:40.582109  362928 main.go:141] libmachine: (addons-400631)     </interface>
	I0408 11:12:40.582116  362928 main.go:141] libmachine: (addons-400631)     <interface type='network'>
	I0408 11:12:40.582125  362928 main.go:141] libmachine: (addons-400631)       <source network='default'/>
	I0408 11:12:40.582132  362928 main.go:141] libmachine: (addons-400631)       <model type='virtio'/>
	I0408 11:12:40.582145  362928 main.go:141] libmachine: (addons-400631)     </interface>
	I0408 11:12:40.582151  362928 main.go:141] libmachine: (addons-400631)     <serial type='pty'>
	I0408 11:12:40.582156  362928 main.go:141] libmachine: (addons-400631)       <target port='0'/>
	I0408 11:12:40.582160  362928 main.go:141] libmachine: (addons-400631)     </serial>
	I0408 11:12:40.582169  362928 main.go:141] libmachine: (addons-400631)     <console type='pty'>
	I0408 11:12:40.582174  362928 main.go:141] libmachine: (addons-400631)       <target type='serial' port='0'/>
	I0408 11:12:40.582178  362928 main.go:141] libmachine: (addons-400631)     </console>
	I0408 11:12:40.582187  362928 main.go:141] libmachine: (addons-400631)     <rng model='virtio'>
	I0408 11:12:40.582199  362928 main.go:141] libmachine: (addons-400631)       <backend model='random'>/dev/random</backend>
	I0408 11:12:40.582207  362928 main.go:141] libmachine: (addons-400631)     </rng>
	I0408 11:12:40.582212  362928 main.go:141] libmachine: (addons-400631)     
	I0408 11:12:40.582216  362928 main.go:141] libmachine: (addons-400631)     
	I0408 11:12:40.582220  362928 main.go:141] libmachine: (addons-400631)   </devices>
	I0408 11:12:40.582225  362928 main.go:141] libmachine: (addons-400631) </domain>
	I0408 11:12:40.582232  362928 main.go:141] libmachine: (addons-400631) 
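The domain definition logged above is rendered by the kvm2 driver from a Go template before being handed to libvirt. A stripped-down sketch of that idea (the template, struct, and helper name are hypothetical simplifications; the real driver's template also carries the disks, interfaces, serial console, and RNG device seen in the log):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// domainTmpl is a minimal libvirt domain skeleton with the fields
// that vary per machine: name, memory, and vCPU count.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
</domain>`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
}

// renderDomainXML fills the skeleton with one machine's settings.
func renderDomainXML(m machine) (string, error) {
	t, err := template.New("domain").Parse(domainTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, m); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	xml, err := renderDomainXML(machine{Name: "addons-400631", MemoryMiB: 4000, CPUs: 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(xml)
}
```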
	I0408 11:12:40.588082  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:e6:58:55 in network default
	I0408 11:12:40.588649  362928 main.go:141] libmachine: (addons-400631) Ensuring networks are active...
	I0408 11:12:40.588669  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:40.589565  362928 main.go:141] libmachine: (addons-400631) Ensuring network default is active
	I0408 11:12:40.589944  362928 main.go:141] libmachine: (addons-400631) Ensuring network mk-addons-400631 is active
	I0408 11:12:40.590590  362928 main.go:141] libmachine: (addons-400631) Getting domain xml...
	I0408 11:12:40.591442  362928 main.go:141] libmachine: (addons-400631) Creating domain...
	I0408 11:12:41.823902  362928 main.go:141] libmachine: (addons-400631) Waiting to get IP...
	I0408 11:12:41.824530  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:41.825025  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:41.825060  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:41.825011  362950 retry.go:31] will retry after 215.743928ms: waiting for machine to come up
	I0408 11:12:42.042622  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:42.042983  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:42.043009  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:42.042957  362950 retry.go:31] will retry after 384.473671ms: waiting for machine to come up
	I0408 11:12:42.428623  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:42.429054  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:42.429078  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:42.429021  362950 retry.go:31] will retry after 414.532127ms: waiting for machine to come up
	I0408 11:12:42.845700  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:42.846145  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:42.846178  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:42.846092  362950 retry.go:31] will retry after 368.406859ms: waiting for machine to come up
	I0408 11:12:43.215737  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:43.216267  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:43.216300  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:43.216225  362950 retry.go:31] will retry after 551.868215ms: waiting for machine to come up
	I0408 11:12:43.769959  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:43.770368  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:43.770387  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:43.770319  362950 retry.go:31] will retry after 783.815033ms: waiting for machine to come up
	I0408 11:12:44.555299  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:44.555796  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:44.555902  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:44.555743  362950 retry.go:31] will retry after 964.444462ms: waiting for machine to come up
	I0408 11:12:45.522780  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:45.523285  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:45.523316  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:45.523240  362950 retry.go:31] will retry after 1.033262627s: waiting for machine to come up
	I0408 11:12:46.557677  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:46.558196  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:46.558222  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:46.558140  362950 retry.go:31] will retry after 1.415193488s: waiting for machine to come up
	I0408 11:12:47.975599  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:47.976080  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:47.976116  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:47.976040  362950 retry.go:31] will retry after 2.227998743s: waiting for machine to come up
	I0408 11:12:50.205104  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:50.205588  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:50.205614  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:50.205553  362950 retry.go:31] will retry after 1.955201507s: waiting for machine to come up
	I0408 11:12:52.163656  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:52.163999  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:52.164024  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:52.163966  362950 retry.go:31] will retry after 3.613190875s: waiting for machine to come up
	I0408 11:12:55.779205  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:55.779762  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:55.779797  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:55.779692  362950 retry.go:31] will retry after 3.344436699s: waiting for machine to come up
	I0408 11:12:59.128073  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:12:59.128447  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find current IP address of domain addons-400631 in network mk-addons-400631
	I0408 11:12:59.128471  362928 main.go:141] libmachine: (addons-400631) DBG | I0408 11:12:59.128386  362950 retry.go:31] will retry after 5.289392028s: waiting for machine to come up
	I0408 11:13:04.422619  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.423019  362928 main.go:141] libmachine: (addons-400631) Found IP for machine: 192.168.39.102
	I0408 11:13:04.423046  362928 main.go:141] libmachine: (addons-400631) Reserving static IP address...
	I0408 11:13:04.423062  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has current primary IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.423447  362928 main.go:141] libmachine: (addons-400631) DBG | unable to find host DHCP lease matching {name: "addons-400631", mac: "52:54:00:ec:31:7f", ip: "192.168.39.102"} in network mk-addons-400631
	I0408 11:13:04.493809  362928 main.go:141] libmachine: (addons-400631) DBG | Getting to WaitForSSH function...
	I0408 11:13:04.493852  362928 main.go:141] libmachine: (addons-400631) Reserved static IP address: 192.168.39.102
	I0408 11:13:04.493901  362928 main.go:141] libmachine: (addons-400631) Waiting for SSH to be available...
	I0408 11:13:04.496754  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.497216  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:04.497248  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.497366  362928 main.go:141] libmachine: (addons-400631) DBG | Using SSH client type: external
	I0408 11:13:04.497392  362928 main.go:141] libmachine: (addons-400631) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa (-rw-------)
	I0408 11:13:04.497462  362928 main.go:141] libmachine: (addons-400631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:13:04.497488  362928 main.go:141] libmachine: (addons-400631) DBG | About to run SSH command:
	I0408 11:13:04.497507  362928 main.go:141] libmachine: (addons-400631) DBG | exit 0
	I0408 11:13:04.623011  362928 main.go:141] libmachine: (addons-400631) DBG | SSH cmd err, output: <nil>: 
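The external-SSH line above shows the argv the driver hands to `/usr/bin/ssh` to probe connectivity. A small Go sketch assembling a similar argument list (the helper name is hypothetical and only a subset of the logged options is included; the shape mirrors the log, with the target and command appended last):

```go
package main

import (
	"fmt"
	"strings"
)

// externalSSHArgs builds an argv like the one in the log: no config file,
// host-key checking disabled, key-only auth, command appended at the end.
func externalSSHArgs(user, host, keyPath string, port int, cmd string) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprintf("%d", port),
		fmt.Sprintf("%s@%s", user, host),
		cmd,
	}
}

func main() {
	args := externalSSHArgs("docker", "192.168.39.102", "/path/to/id_rsa", 22, "exit 0")
	fmt.Println("ssh " + strings.Join(args, " "))
}
```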
	I0408 11:13:04.623302  362928 main.go:141] libmachine: (addons-400631) KVM machine creation complete!
	I0408 11:13:04.623570  362928 main.go:141] libmachine: (addons-400631) Calling .GetConfigRaw
	I0408 11:13:04.624204  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:04.624430  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:04.624621  362928 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:13:04.624638  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:04.625888  362928 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:13:04.625905  362928 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:13:04.625913  362928 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:13:04.625928  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:04.628336  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.628626  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:04.628678  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.628866  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:04.629096  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.629290  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.629426  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:04.629568  362928 main.go:141] libmachine: Using SSH client type: native
	I0408 11:13:04.629794  362928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0408 11:13:04.629807  362928 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:13:04.742594  362928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:13:04.742614  362928 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:13:04.742623  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:04.745239  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.745551  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:04.745576  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.745734  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:04.745962  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.746170  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.746326  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:04.746492  362928 main.go:141] libmachine: Using SSH client type: native
	I0408 11:13:04.746657  362928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0408 11:13:04.746669  362928 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:13:04.860069  362928 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:13:04.860181  362928 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:13:04.860201  362928 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:13:04.860211  362928 main.go:141] libmachine: (addons-400631) Calling .GetMachineName
	I0408 11:13:04.860550  362928 buildroot.go:166] provisioning hostname "addons-400631"
	I0408 11:13:04.860582  362928 main.go:141] libmachine: (addons-400631) Calling .GetMachineName
	I0408 11:13:04.860780  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:04.863424  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.863828  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:04.863851  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:04.864029  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:04.864211  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.864365  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:04.864484  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:04.864632  362928 main.go:141] libmachine: Using SSH client type: native
	I0408 11:13:04.864820  362928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0408 11:13:04.864833  362928 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-400631 && echo "addons-400631" | sudo tee /etc/hostname
	I0408 11:13:04.997068  362928 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-400631
	
	I0408 11:13:04.997094  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:04.999759  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.000059  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.000089  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.000215  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.000423  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.000611  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.000752  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.000919  362928 main.go:141] libmachine: Using SSH client type: native
	I0408 11:13:05.001103  362928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0408 11:13:05.001125  362928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-400631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-400631/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-400631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:13:05.121062  362928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:13:05.121102  362928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-354699/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-354699/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-354699/.minikube}
	I0408 11:13:05.121146  362928 buildroot.go:174] setting up certificates
	I0408 11:13:05.121160  362928 provision.go:84] configureAuth start
	I0408 11:13:05.121174  362928 main.go:141] libmachine: (addons-400631) Calling .GetMachineName
	I0408 11:13:05.121463  362928 main.go:141] libmachine: (addons-400631) Calling .GetIP
	I0408 11:13:05.123872  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.124169  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.124211  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.124289  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.126403  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.126788  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.126808  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.126963  362928 provision.go:143] copyHostCerts
	I0408 11:13:05.127038  362928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-354699/.minikube/ca.pem (1078 bytes)
	I0408 11:13:05.127194  362928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-354699/.minikube/cert.pem (1123 bytes)
	I0408 11:13:05.127296  362928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-354699/.minikube/key.pem (1679 bytes)
	I0408 11:13:05.127378  362928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-354699/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca-key.pem org=jenkins.addons-400631 san=[127.0.0.1 192.168.39.102 addons-400631 localhost minikube]
	I0408 11:13:05.212635  362928 provision.go:177] copyRemoteCerts
	I0408 11:13:05.212704  362928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:13:05.212733  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.215318  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.215642  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.215675  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.215778  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.215981  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.216124  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.216245  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:05.304762  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:13:05.331038  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 11:13:05.356256  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:13:05.380859  362928 provision.go:87] duration metric: took 259.680881ms to configureAuth
	I0408 11:13:05.380891  362928 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:13:05.381166  362928 config.go:182] Loaded profile config "addons-400631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:13:05.381200  362928 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:13:05.381221  362928 main.go:141] libmachine: (addons-400631) Calling .GetURL
	I0408 11:13:05.382486  362928 main.go:141] libmachine: (addons-400631) DBG | Using libvirt version 6000000
	I0408 11:13:05.384765  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.385157  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.385179  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.385365  362928 main.go:141] libmachine: Docker is up and running!
	I0408 11:13:05.385379  362928 main.go:141] libmachine: Reticulating splines...
	I0408 11:13:05.385386  362928 client.go:171] duration metric: took 25.604837866s to LocalClient.Create
	I0408 11:13:05.385411  362928 start.go:167] duration metric: took 25.604896906s to libmachine.API.Create "addons-400631"
	I0408 11:13:05.385422  362928 start.go:293] postStartSetup for "addons-400631" (driver="kvm2")
	I0408 11:13:05.385433  362928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:13:05.385460  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:05.385712  362928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:13:05.385735  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.388060  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.388371  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.388390  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.388525  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.388723  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.388875  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.389013  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:05.475000  362928 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:13:05.479656  362928 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:13:05.479685  362928 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-354699/.minikube/addons for local assets ...
	I0408 11:13:05.479773  362928 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-354699/.minikube/files for local assets ...
	I0408 11:13:05.479811  362928 start.go:296] duration metric: took 94.380494ms for postStartSetup
	I0408 11:13:05.479851  362928 main.go:141] libmachine: (addons-400631) Calling .GetConfigRaw
	I0408 11:13:05.480460  362928 main.go:141] libmachine: (addons-400631) Calling .GetIP
	I0408 11:13:05.482937  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.483296  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.483339  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.483532  362928 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/config.json ...
	I0408 11:13:05.483749  362928 start.go:128] duration metric: took 25.720470947s to createHost
	I0408 11:13:05.483776  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.485782  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.486114  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.486145  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.486256  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.486422  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.486543  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.486685  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.486873  362928 main.go:141] libmachine: Using SSH client type: native
	I0408 11:13:05.487072  362928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0408 11:13:05.487084  362928 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:13:05.596025  362928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712574785.567738321
	
	I0408 11:13:05.596056  362928 fix.go:216] guest clock: 1712574785.567738321
	I0408 11:13:05.596067  362928 fix.go:229] Guest: 2024-04-08 11:13:05.567738321 +0000 UTC Remote: 2024-04-08 11:13:05.483762807 +0000 UTC m=+25.832691644 (delta=83.975514ms)
	I0408 11:13:05.596098  362928 fix.go:200] guest clock delta is within tolerance: 83.975514ms
	I0408 11:13:05.596107  362928 start.go:83] releasing machines lock for "addons-400631", held for 25.832907858s
	I0408 11:13:05.596139  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:05.596442  362928 main.go:141] libmachine: (addons-400631) Calling .GetIP
	I0408 11:13:05.598855  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.599191  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.599219  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.599367  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:05.599854  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:05.600024  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:05.600116  362928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:13:05.600182  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.600233  362928 ssh_runner.go:195] Run: cat /version.json
	I0408 11:13:05.600261  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:05.602575  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.602855  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.602885  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.602957  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.603026  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.603204  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.603341  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.603345  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:05.603490  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:05.603523  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:05.603640  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:05.603857  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:05.604033  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:05.604161  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:05.707730  362928 ssh_runner.go:195] Run: systemctl --version
	I0408 11:13:05.714312  362928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:13:05.720589  362928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:13:05.720642  362928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:13:05.739882  362928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:13:05.739905  362928 start.go:494] detecting cgroup driver to use...
	I0408 11:13:05.739991  362928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 11:13:05.772691  362928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 11:13:05.787469  362928 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:13:05.787542  362928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:13:05.803011  362928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:13:05.818472  362928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:13:05.947858  362928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:13:06.110540  362928 docker.go:233] disabling docker service ...
	I0408 11:13:06.110626  362928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:13:06.126610  362928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:13:06.140778  362928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:13:06.260777  362928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:13:06.377064  362928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:13:06.392016  362928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:13:06.411768  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0408 11:13:06.423612  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 11:13:06.435497  362928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 11:13:06.435540  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 11:13:06.447366  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 11:13:06.459153  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 11:13:06.470916  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 11:13:06.482867  362928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:13:06.494866  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 11:13:06.506578  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 11:13:06.518260  362928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 11:13:06.529893  362928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:13:06.540236  362928 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:13:06.540303  362928 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:13:06.555633  362928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:13:06.565953  362928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:13:06.688987  362928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 11:13:06.719827  362928 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 11:13:06.719928  362928 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 11:13:06.725140  362928 retry.go:31] will retry after 1.104293509s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0408 11:13:07.830426  362928 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 11:13:07.836355  362928 start.go:562] Will wait 60s for crictl version
	I0408 11:13:07.836458  362928 ssh_runner.go:195] Run: which crictl
	I0408 11:13:07.840790  362928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:13:07.876463  362928 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0408 11:13:07.876540  362928 ssh_runner.go:195] Run: containerd --version
	I0408 11:13:07.905953  362928 ssh_runner.go:195] Run: containerd --version
	I0408 11:13:07.934388  362928 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.14 ...
	I0408 11:13:07.935816  362928 main.go:141] libmachine: (addons-400631) Calling .GetIP
	I0408 11:13:07.938371  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:07.938728  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:07.938759  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:07.938971  362928 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:13:07.943326  362928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:13:07.957600  362928 kubeadm.go:877] updating cluster {Name:addons-400631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-400631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:13:07.957728  362928 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 11:13:07.957780  362928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:13:07.992622  362928 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 11:13:07.992689  362928 ssh_runner.go:195] Run: which lz4
	I0408 11:13:07.996837  362928 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 11:13:08.001339  362928 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 11:13:08.001364  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0408 11:13:09.489431  362928 containerd.go:563] duration metric: took 1.492631883s to copy over tarball
	I0408 11:13:09.489511  362928 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 11:13:12.195929  362928 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706375685s)
	I0408 11:13:12.195972  362928 containerd.go:570] duration metric: took 2.706508447s to extract the tarball
	I0408 11:13:12.195980  362928 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 11:13:12.236084  362928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:13:12.354812  362928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 11:13:12.387410  362928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:13:12.445043  362928 retry.go:31] will retry after 346.46588ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T11:13:12Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0408 11:13:12.792717  362928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:13:12.829528  362928 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 11:13:12.829554  362928 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:13:12.829562  362928 kubeadm.go:928] updating node { 192.168.39.102 8443 v1.29.3 containerd true true} ...
	I0408 11:13:12.829735  362928 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-400631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-400631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:13:12.829816  362928 ssh_runner.go:195] Run: sudo crictl info
	I0408 11:13:12.870229  362928 cni.go:84] Creating CNI manager for ""
	I0408 11:13:12.870264  362928 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:13:12.870283  362928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:13:12.870341  362928 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-400631 NodeName:addons-400631 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:13:12.870540  362928 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-400631"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 11:13:12.870623  362928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:13:12.881915  362928 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:13:12.881996  362928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 11:13:12.893339  362928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0408 11:13:12.911902  362928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:13:12.929751  362928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0408 11:13:12.947585  362928 ssh_runner.go:195] Run: grep 192.168.39.102	control-plane.minikube.internal$ /etc/hosts
	I0408 11:13:12.951687  362928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:13:12.966165  362928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:13:13.086709  362928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:13:13.110041  362928 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631 for IP: 192.168.39.102
	I0408 11:13:13.110074  362928 certs.go:194] generating shared ca certs ...
	I0408 11:13:13.110096  362928 certs.go:226] acquiring lock for ca certs: {Name:mk950d95e2e19a4fbc7b11b065be1fb873491ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.110261  362928 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-354699/.minikube/ca.key
	I0408 11:13:13.307344  362928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt ...
	I0408 11:13:13.307382  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt: {Name:mk139a9e1e327ada546f9662984c53daa0790741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.307591  362928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-354699/.minikube/ca.key ...
	I0408 11:13:13.307609  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/ca.key: {Name:mk947863ab1cbd7158f59c49c89bdbb912c806c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.307722  362928 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.key
	I0408 11:13:13.395380  362928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.crt ...
	I0408 11:13:13.395418  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.crt: {Name:mkfb63f1e21cab1eaf1c8f40b21e39e4f0861655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.395612  362928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.key ...
	I0408 11:13:13.395629  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.key: {Name:mk7f32ae9be471fa704495a27035e48162819165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.395737  362928 certs.go:256] generating profile certs ...
	I0408 11:13:13.395807  362928 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.key
	I0408 11:13:13.395842  362928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt with IP's: []
	I0408 11:13:13.636423  362928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt ...
	I0408 11:13:13.636461  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: {Name:mk48b6ba338230dbd4ab78cff27d7119ea9af3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.636657  362928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.key ...
	I0408 11:13:13.636674  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.key: {Name:mk8fbdaf34f958647ee9ffc43bb61edd8b5267b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.636777  362928 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key.8822dcf7
	I0408 11:13:13.636799  362928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt.8822dcf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102]
	I0408 11:13:13.807184  362928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt.8822dcf7 ...
	I0408 11:13:13.807216  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt.8822dcf7: {Name:mk504eba6e084746cc4d5273d2d0bada3c10e5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.807384  362928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key.8822dcf7 ...
	I0408 11:13:13.807399  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key.8822dcf7: {Name:mk4e4bd9294567f1ada29680174c28dc7de51b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.807470  362928 certs.go:381] copying /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt.8822dcf7 -> /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt
	I0408 11:13:13.807539  362928 certs.go:385] copying /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key.8822dcf7 -> /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key
	I0408 11:13:13.807584  362928 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.key
	I0408 11:13:13.807602  362928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.crt with IP's: []
	I0408 11:13:13.888132  362928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.crt ...
	I0408 11:13:13.888163  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.crt: {Name:mk37d17a4ebe0ed4a64443de103e76b933d0e69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.888316  362928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.key ...
	I0408 11:13:13.888329  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.key: {Name:mkbbae996455089ffb9be3de1a8e41c7fbf91dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:13.888497  362928 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 11:13:13.888532  362928 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/ca.pem (1078 bytes)
	I0408 11:13:13.888556  362928 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:13:13.888577  362928 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-354699/.minikube/certs/key.pem (1679 bytes)
	I0408 11:13:13.889198  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:13:13.925786  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:13:13.951988  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:13:13.976512  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:13:14.000834  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 11:13:14.024966  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:13:14.049152  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:13:14.074054  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:13:14.098108  362928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:13:14.122421  362928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:13:14.140790  362928 ssh_runner.go:195] Run: openssl version
	I0408 11:13:14.146741  362928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:13:14.158901  362928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:13:14.163746  362928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:13:14.163800  362928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:13:14.169676  362928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:13:14.181600  362928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:13:14.186260  362928 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:13:14.186343  362928 kubeadm.go:391] StartCluster: {Name:addons-400631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-400631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:13:14.186433  362928 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 11:13:14.186493  362928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:13:14.227357  362928 cri.go:89] found id: ""
	I0408 11:13:14.227429  362928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 11:13:14.238418  362928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 11:13:14.249275  362928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 11:13:14.260060  362928 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 11:13:14.260082  362928 kubeadm.go:156] found existing configuration files:
	
	I0408 11:13:14.260118  362928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 11:13:14.270524  362928 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 11:13:14.270588  362928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 11:13:14.281241  362928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 11:13:14.292203  362928 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 11:13:14.292248  362928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 11:13:14.303381  362928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 11:13:14.313909  362928 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 11:13:14.313947  362928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 11:13:14.325208  362928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 11:13:14.336022  362928 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 11:13:14.336074  362928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 11:13:14.347417  362928 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 11:13:14.398986  362928 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 11:13:14.399056  362928 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 11:13:14.554558  362928 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 11:13:14.554700  362928 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 11:13:14.554869  362928 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 11:13:14.781606  362928 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 11:13:14.816239  362928 out.go:204]   - Generating certificates and keys ...
	I0408 11:13:14.816349  362928 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 11:13:14.816432  362928 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 11:13:14.922702  362928 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 11:13:15.201477  362928 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 11:13:15.318717  362928 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 11:13:15.438261  362928 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 11:13:15.561574  362928 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 11:13:15.561729  362928 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-400631 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0408 11:13:15.744859  362928 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 11:13:15.745059  362928 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-400631 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0408 11:13:15.859751  362928 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 11:13:15.977371  362928 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 11:13:16.048414  362928 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 11:13:16.048816  362928 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 11:13:16.354030  362928 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 11:13:16.549610  362928 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 11:13:16.737338  362928 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 11:13:16.965713  362928 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 11:13:17.304834  362928 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 11:13:17.305325  362928 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 11:13:17.307690  362928 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 11:13:17.309420  362928 out.go:204]   - Booting up control plane ...
	I0408 11:13:17.309529  362928 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 11:13:17.309613  362928 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 11:13:17.309698  362928 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 11:13:17.329578  362928 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 11:13:17.330297  362928 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 11:13:17.330346  362928 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 11:13:17.463934  362928 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 11:13:23.463472  362928 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002744 seconds
	I0408 11:13:23.480725  362928 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 11:13:23.495007  362928 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 11:13:24.033207  362928 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 11:13:24.033481  362928 kubeadm.go:309] [mark-control-plane] Marking the node addons-400631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 11:13:24.547426  362928 kubeadm.go:309] [bootstrap-token] Using token: brpih0.rpj39n45n5ukhnzl
	I0408 11:13:24.548810  362928 out.go:204]   - Configuring RBAC rules ...
	I0408 11:13:24.548955  362928 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 11:13:24.553934  362928 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 11:13:24.567039  362928 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 11:13:24.577206  362928 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 11:13:24.580452  362928 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 11:13:24.584463  362928 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 11:13:24.600192  362928 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 11:13:24.878793  362928 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 11:13:24.975317  362928 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 11:13:24.976302  362928 kubeadm.go:309] 
	I0408 11:13:24.976402  362928 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 11:13:24.976415  362928 kubeadm.go:309] 
	I0408 11:13:24.976516  362928 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 11:13:24.976528  362928 kubeadm.go:309] 
	I0408 11:13:24.976558  362928 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 11:13:24.976643  362928 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 11:13:24.976731  362928 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 11:13:24.976755  362928 kubeadm.go:309] 
	I0408 11:13:24.976861  362928 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 11:13:24.976880  362928 kubeadm.go:309] 
	I0408 11:13:24.976952  362928 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 11:13:24.976960  362928 kubeadm.go:309] 
	I0408 11:13:24.977023  362928 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 11:13:24.977129  362928 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 11:13:24.977222  362928 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 11:13:24.977233  362928 kubeadm.go:309] 
	I0408 11:13:24.977349  362928 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 11:13:24.977417  362928 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 11:13:24.977426  362928 kubeadm.go:309] 
	I0408 11:13:24.977488  362928 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token brpih0.rpj39n45n5ukhnzl \
	I0408 11:13:24.977567  362928 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:87d17c165dab7675601dd93d7d2d06862f8f0f07cec67cc4bf33ad4438b8083d \
	I0408 11:13:24.977594  362928 kubeadm.go:309] 	--control-plane 
	I0408 11:13:24.977608  362928 kubeadm.go:309] 
	I0408 11:13:24.977700  362928 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 11:13:24.977710  362928 kubeadm.go:309] 
	I0408 11:13:24.977801  362928 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token brpih0.rpj39n45n5ukhnzl \
	I0408 11:13:24.977962  362928 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:87d17c165dab7675601dd93d7d2d06862f8f0f07cec67cc4bf33ad4438b8083d 
	I0408 11:13:24.979206  362928 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 11:13:24.979234  362928 cni.go:84] Creating CNI manager for ""
	I0408 11:13:24.979241  362928 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:13:24.980804  362928 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 11:13:24.982019  362928 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 11:13:24.997107  362928 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 11:13:25.038812  362928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 11:13:25.038955  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:25.038966  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-400631 minikube.k8s.io/updated_at=2024_04_08T11_13_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=addons-400631 minikube.k8s.io/primary=true
	I0408 11:13:25.125702  362928 ops.go:34] apiserver oom_adj: -16
	I0408 11:13:25.300633  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:25.800871  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:26.301365  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:26.800942  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:27.300794  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:27.801233  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:28.301741  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:28.800671  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:29.301691  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:29.801079  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:30.301718  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:30.801195  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:31.301210  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:31.801611  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:32.301523  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:32.801080  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:33.301033  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:33.801196  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:34.301343  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:34.801374  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:35.301286  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:35.800931  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:36.301760  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:36.801360  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:37.301308  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:37.801581  362928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:13:37.929164  362928 kubeadm.go:1107] duration metric: took 12.89029751s to wait for elevateKubeSystemPrivileges
	W0408 11:13:37.929212  362928 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 11:13:37.929221  362928 kubeadm.go:393] duration metric: took 23.742882983s to StartCluster
	I0408 11:13:37.929243  362928 settings.go:142] acquiring lock: {Name:mk778f9866a799b5fa2941a21bff69ba7144f2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:37.929372  362928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:13:37.929796  362928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/kubeconfig: {Name:mk788bbf293e278b977c1f86c4d69aa553f3cc07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:13:37.930035  362928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 11:13:37.930137  362928 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 11:13:37.931778  362928 out.go:177] * Verifying Kubernetes components...
	I0408 11:13:37.930207  362928 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0408 11:13:37.930323  362928 config.go:182] Loaded profile config "addons-400631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:13:37.933164  362928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:13:37.933180  362928 addons.go:69] Setting cloud-spanner=true in profile "addons-400631"
	I0408 11:13:37.933179  362928 addons.go:69] Setting ingress-dns=true in profile "addons-400631"
	I0408 11:13:37.933190  362928 addons.go:69] Setting yakd=true in profile "addons-400631"
	I0408 11:13:37.933219  362928 addons.go:69] Setting gcp-auth=true in profile "addons-400631"
	I0408 11:13:37.933238  362928 addons.go:69] Setting default-storageclass=true in profile "addons-400631"
	I0408 11:13:37.933263  362928 addons.go:234] Setting addon yakd=true in "addons-400631"
	I0408 11:13:37.933278  362928 addons.go:69] Setting registry=true in profile "addons-400631"
	I0408 11:13:37.933283  362928 mustload.go:65] Loading cluster: addons-400631
	I0408 11:13:37.933284  362928 addons.go:69] Setting inspektor-gadget=true in profile "addons-400631"
	I0408 11:13:37.933297  362928 addons.go:234] Setting addon registry=true in "addons-400631"
	I0408 11:13:37.933304  362928 addons.go:234] Setting addon inspektor-gadget=true in "addons-400631"
	I0408 11:13:37.933308  362928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-400631"
	I0408 11:13:37.933315  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933332  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933338  362928 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-400631"
	I0408 11:13:37.933358  362928 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-400631"
	I0408 11:13:37.933501  362928 config.go:182] Loaded profile config "addons-400631": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:13:37.933591  362928 addons.go:69] Setting storage-provisioner=true in profile "addons-400631"
	I0408 11:13:37.933651  362928 addons.go:234] Setting addon storage-provisioner=true in "addons-400631"
	I0408 11:13:37.933636  362928 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-400631"
	I0408 11:13:37.933690  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933686  362928 addons.go:69] Setting volumesnapshots=true in profile "addons-400631"
	I0408 11:13:37.933704  362928 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-400631"
	I0408 11:13:37.933721  362928 addons.go:234] Setting addon volumesnapshots=true in "addons-400631"
	I0408 11:13:37.933756  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933767  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933800  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.933826  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.933881  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.933886  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.933905  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.933171  362928 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-400631"
	I0408 11:13:37.933949  362928 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-400631"
	I0408 11:13:37.933264  362928 addons.go:234] Setting addon cloud-spanner=true in "addons-400631"
	I0408 11:13:37.933270  362928 addons.go:69] Setting helm-tiller=true in profile "addons-400631"
	I0408 11:13:37.933977  362928 addons.go:234] Setting addon helm-tiller=true in "addons-400631"
	I0408 11:13:37.933999  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.934026  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934048  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934073  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934118  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.933906  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934119  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.933279  362928 addons.go:69] Setting metrics-server=true in profile "addons-400631"
	I0408 11:13:37.934173  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934177  362928 addons.go:234] Setting addon metrics-server=true in "addons-400631"
	I0408 11:13:37.933332  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.934370  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.933272  362928 addons.go:69] Setting ingress=true in profile "addons-400631"
	I0408 11:13:37.934457  362928 addons.go:234] Setting addon ingress=true in "addons-400631"
	I0408 11:13:37.933264  362928 addons.go:234] Setting addon ingress-dns=true in "addons-400631"
	I0408 11:13:37.934535  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934555  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934563  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.934535  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934598  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.934721  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934758  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934792  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934809  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934931  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.934601  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.934968  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.935031  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.935066  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.935111  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.937984  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.938461  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.938485  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.955043  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0408 11:13:37.955113  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I0408 11:13:37.955226  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0408 11:13:37.955254  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0408 11:13:37.955801  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.955818  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.955800  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.956031  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.956437  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.956457  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.956546  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.956558  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.956565  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.956583  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.956583  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.956594  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.956825  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.956897  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.957095  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:37.957145  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.957502  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.957546  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.957699  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.957726  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.958345  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.960810  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0408 11:13:37.965420  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I0408 11:13:37.967197  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.967231  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.968646  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.968696  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.969558  362928 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-400631"
	I0408 11:13:37.969609  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:37.969674  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.969698  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.969990  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.970039  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.970500  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.970647  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.971035  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0408 11:13:37.971329  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.971346  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.971502  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.971514  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.971713  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.972286  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.972326  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.972484  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.972558  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.973044  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.973060  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.973892  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.973920  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.979260  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.980185  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.980239  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.989497  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0408 11:13:37.990248  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.991077  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.991099  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.991565  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.992186  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.992225  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:37.995582  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0408 11:13:37.995597  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0408 11:13:37.996152  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.996154  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:37.996693  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.996717  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.996858  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:37.996873  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:37.997260  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:37.997877  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:37.997917  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.003074  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.003374  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.004599  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0408 11:13:38.004944  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0408 11:13:38.006866  362928 addons.go:234] Setting addon default-storageclass=true in "addons-400631"
	I0408 11:13:38.006914  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:38.007290  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.007329  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.010408  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.010426  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.010977  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.010998  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.011569  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.012168  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.012212  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.018980  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33187
	I0408 11:13:38.019276  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.019296  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.019805  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.019882  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.019965  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
	I0408 11:13:38.020288  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.021057  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.021075  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.021579  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.021670  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.022199  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.022237  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.024370  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0408 11:13:38.024503  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.024639  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.024651  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.024711  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0408 11:13:38.026656  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 11:13:38.028317  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 11:13:38.028342  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 11:13:38.028365  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.027129  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0408 11:13:38.027156  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.027164  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.028951  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I0408 11:13:38.029376  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.029445  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.029462  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.030008  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.030075  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.030137  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.030404  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.030573  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.030749  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.030766  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.031064  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0408 11:13:38.031417  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.031434  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.031471  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.031487  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.031488  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.032069  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.032095  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.032120  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.032187  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0408 11:13:38.032460  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.032549  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.033139  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.033243  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0408 11:13:38.033592  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.033848  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.033867  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.033927  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.033979  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.034262  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.034466  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.034712  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.034820  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.034902  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0408 11:13:38.035328  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.035528  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.035548  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.035820  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.035839  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.035898  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.035903  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.035972  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.036209  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.037666  362928 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0408 11:13:38.036262  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.036290  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.037243  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.037925  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.038199  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.038947  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.039077  362928 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 11:13:38.039088  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 11:13:38.039104  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.039185  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.040117  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.041467  362928 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 11:13:38.040397  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.040419  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.041247  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.041289  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.043788  362928 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0408 11:13:38.042771  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0408 11:13:38.042905  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.043246  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.043271  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.043308  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I0408 11:13:38.044012  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.044036  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.044883  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.044985  362928 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 11:13:38.045485  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.046060  362928 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0408 11:13:38.048527  362928 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 11:13:38.048545  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 11:13:38.047153  362928 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0408 11:13:38.048564  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.046246  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.047303  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0408 11:13:38.046093  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.047771  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.048004  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.048043  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.049805  362928 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 11:13:38.050077  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.050091  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.050682  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0408 11:13:38.050723  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.051433  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.051671  362928 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 11:13:38.051763  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.053515  362928 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:13:38.053528  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 11:13:38.053542  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.051838  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 11:13:38.053584  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.052197  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.052218  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.053630  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.052232  362928 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0408 11:13:38.052294  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.052365  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.052599  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.052726  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.053916  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.055142  362928 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0408 11:13:38.055154  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0408 11:13:38.055171  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.055215  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.055592  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.055640  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.055921  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.055933  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.056354  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.056587  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.056608  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.056915  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.057678  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.057840  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:38.058100  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:38.058124  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:38.058408  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.059299  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.059324  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.059336  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0408 11:13:38.059447  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.059460  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.059480  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.059540  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.059777  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.059955  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.060486  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.060514  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.061014  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.061115  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.061313  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.061797  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.061831  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.061882  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.061898  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.061955  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.062318  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.062330  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.062348  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.062378  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.062372  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.062390  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.062591  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.062696  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.062822  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.062851  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.062885  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.063092  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.063631  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.063858  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.065722  362928 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0408 11:13:38.066925  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.065163  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.067030  362928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0408 11:13:38.067049  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0408 11:13:38.067068  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.068609  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 11:13:38.067483  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.067967  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I0408 11:13:38.069161  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0408 11:13:38.071013  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 11:13:38.070083  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.070647  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.071276  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.072217  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 11:13:38.073564  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 11:13:38.072242  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.072017  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0408 11:13:38.071586  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.072407  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.072757  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.073603  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.073708  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.073864  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.074128  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.074134  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.074595  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.075159  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.075036  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 11:13:38.075229  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.075276  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.075578  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.076414  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 11:13:38.078060  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 11:13:38.075847  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.077184  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.078200  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.080312  362928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 11:13:38.079188  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.079506  362928 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 11:13:38.081009  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.081171  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0408 11:13:38.081621  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 11:13:38.081637  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 11:13:38.081636  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 11:13:38.081658  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.081661  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.082590  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.083780  362928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:13:38.082912  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.083016  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.084951  362928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:13:38.086192  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.085248  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.085503  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.085872  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.086722  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.087596  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.087628  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.086976  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.087640  362928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0408 11:13:38.087687  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.087413  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.089072  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.087009  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0408 11:13:38.087801  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.087820  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.088215  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.089287  362928 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 11:13:38.089305  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 11:13:38.090621  362928 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0408 11:13:38.089338  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.089478  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.089517  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.089532  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.089566  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.091878  362928 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 11:13:38.091897  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 11:13:38.091920  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.092118  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.092118  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.092900  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.092928  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.093443  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.093627  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:38.094137  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.095645  362928 out.go:177]   - Using image docker.io/busybox:stable
	I0408 11:13:38.096897  362928 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 11:13:38.095936  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.096139  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I0408 11:13:38.096741  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.097774  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.098051  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.098073  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.098093  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.098213  362928 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 11:13:38.098226  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 11:13:38.098243  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.098404  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.098421  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.098438  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.099825  362928 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0408 11:13:38.098537  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.098555  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.098784  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:38.101122  362928 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0408 11:13:38.101145  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 11:13:38.101164  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:38.101260  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.101257  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.101571  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.101675  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.102047  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.102213  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.102250  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.102332  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.102375  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.102530  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:38.102684  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:38.103287  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:38.103310  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:38.103715  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:38.103908  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:38.104322  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.104754  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:38.104777  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:38.104948  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:38.105088  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:38.105216  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	W0408 11:13:38.105345  362928 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44616->192.168.39.102:22: read: connection reset by peer
	I0408 11:13:38.105374  362928 retry.go:31] will retry after 203.792874ms: ssh: handshake failed: read tcp 192.168.39.1:44616->192.168.39.102:22: read: connection reset by peer
	I0408 11:13:38.105418  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	W0408 11:13:38.105806  362928 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44624->192.168.39.102:22: read: connection reset by peer
	I0408 11:13:38.105834  362928 retry.go:31] will retry after 300.540374ms: ssh: handshake failed: read tcp 192.168.39.1:44624->192.168.39.102:22: read: connection reset by peer
	W0408 11:13:38.108238  362928 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44634->192.168.39.102:22: read: connection reset by peer
	I0408 11:13:38.108262  362928 retry.go:31] will retry after 260.278865ms: ssh: handshake failed: read tcp 192.168.39.1:44634->192.168.39.102:22: read: connection reset by peer
	I0408 11:13:38.586850  362928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0408 11:13:38.586881  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0408 11:13:38.648813  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:13:38.689572  362928 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0408 11:13:38.689602  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0408 11:13:38.717685  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 11:13:38.732395  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 11:13:38.747833  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 11:13:38.770159  362928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 11:13:38.770196  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 11:13:38.810970  362928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 11:13:38.811006  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0408 11:13:38.840858  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 11:13:38.840896  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 11:13:38.867513  362928 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 11:13:38.867551  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 11:13:38.869936  362928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 11:13:38.869972  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 11:13:38.872580  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 11:13:38.889578  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 11:13:38.970402  362928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 11:13:38.970437  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 11:13:38.973276  362928 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 11:13:38.973303  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 11:13:38.982375  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 11:13:38.988661  362928 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0408 11:13:38.988687  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0408 11:13:39.000184  362928 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.070108548s)
	I0408 11:13:39.000254  362928 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.067061521s)
	I0408 11:13:39.000317  362928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:13:39.000401  362928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 11:13:39.177407  362928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 11:13:39.177449  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 11:13:39.252675  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 11:13:39.252717  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 11:13:39.337291  362928 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 11:13:39.337324  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 11:13:39.404172  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 11:13:39.432349  362928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 11:13:39.432389  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 11:13:39.487469  362928 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0408 11:13:39.487500  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0408 11:13:39.669521  362928 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 11:13:39.669550  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 11:13:39.798117  362928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 11:13:39.798145  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 11:13:39.812561  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 11:13:39.812594  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 11:13:39.876404  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 11:13:39.952439  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 11:13:39.952470  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 11:13:40.094664  362928 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0408 11:13:40.094705  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0408 11:13:40.150664  362928 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 11:13:40.150697  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 11:13:40.191679  362928 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:13:40.191707  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 11:13:40.273257  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 11:13:40.453367  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 11:13:40.453407  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 11:13:40.527813  362928 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0408 11:13:40.527840  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0408 11:13:40.571560  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:13:40.642045  362928 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 11:13:40.642075  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 11:13:40.740321  362928 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 11:13:40.740359  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0408 11:13:40.758084  362928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 11:13:40.758109  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 11:13:40.851392  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 11:13:41.050163  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 11:13:41.050195  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 11:13:41.051162  362928 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 11:13:41.051181  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0408 11:13:41.482431  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 11:13:41.557403  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 11:13:41.557438  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 11:13:41.941952  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 11:13:41.941993  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 11:13:42.239374  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 11:13:42.239415  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 11:13:42.392264  362928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 11:13:42.392292  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 11:13:42.624281  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 11:13:44.161469  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.443730603s)
	I0408 11:13:44.161504  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.512648758s)
	I0408 11:13:44.161534  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:44.161547  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:44.161557  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:44.161574  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:44.161849  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:44.161870  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:44.161881  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:44.161889  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:44.162082  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:44.162103  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:44.162108  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:44.162121  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:44.162121  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:44.162124  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:44.162136  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:44.162144  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:44.162417  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:44.162422  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:44.162436  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:44.917258  362928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 11:13:44.917312  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:44.921034  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:44.921573  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:44.921603  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:44.921773  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:44.921996  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:44.922213  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:44.922374  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:45.417714  362928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 11:13:45.461797  362928 addons.go:234] Setting addon gcp-auth=true in "addons-400631"
	I0408 11:13:45.461874  362928 host.go:66] Checking if "addons-400631" exists ...
	I0408 11:13:45.462320  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:45.462364  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:45.498247  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0408 11:13:45.498766  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:45.499328  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:45.499352  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:45.499780  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:45.500277  362928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:13:45.500327  362928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:13:45.515397  362928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0408 11:13:45.515907  362928 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:13:45.516413  362928 main.go:141] libmachine: Using API Version  1
	I0408 11:13:45.516439  362928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:13:45.516804  362928 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:13:45.517019  362928 main.go:141] libmachine: (addons-400631) Calling .GetState
	I0408 11:13:45.518556  362928 main.go:141] libmachine: (addons-400631) Calling .DriverName
	I0408 11:13:45.518849  362928 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 11:13:45.518876  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHHostname
	I0408 11:13:45.521680  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:45.522132  362928 main.go:141] libmachine: (addons-400631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:31:7f", ip: ""} in network mk-addons-400631: {Iface:virbr1 ExpiryTime:2024-04-08 12:12:55 +0000 UTC Type:0 Mac:52:54:00:ec:31:7f Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-400631 Clientid:01:52:54:00:ec:31:7f}
	I0408 11:13:45.522162  362928 main.go:141] libmachine: (addons-400631) DBG | domain addons-400631 has defined IP address 192.168.39.102 and MAC address 52:54:00:ec:31:7f in network mk-addons-400631
	I0408 11:13:45.522350  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHPort
	I0408 11:13:45.522537  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHKeyPath
	I0408 11:13:45.522707  362928 main.go:141] libmachine: (addons-400631) Calling .GetSSHUsername
	I0408 11:13:45.522888  362928 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/addons-400631/id_rsa Username:docker}
	I0408 11:13:48.273775  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.541336206s)
	I0408 11:13:48.273841  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.273847  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.525970028s)
	I0408 11:13:48.273891  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.401282297s)
	I0408 11:13:48.273855  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.273929  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.273945  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.273900  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.273961  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.273978  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.291574099s)
	I0408 11:13:48.273928  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.384316687s)
	I0408 11:13:48.274006  362928 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.273669528s)
	I0408 11:13:48.274014  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274017  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274022  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274029  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274040  362928 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.273617294s)
	I0408 11:13:48.274060  362928 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0408 11:13:48.274101  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.869900391s)
	I0408 11:13:48.274117  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274123  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274195  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.397737324s)
	I0408 11:13:48.274223  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274234  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274341  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.001056642s)
	I0408 11:13:48.274357  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274363  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274473  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.702877599s)
	W0408 11:13:48.274542  362928 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 11:13:48.274568  362928 retry.go:31] will retry after 158.964075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 11:13:48.274624  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.423201762s)
	I0408 11:13:48.274642  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274652  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.274771  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.792298097s)
	I0408 11:13:48.274791  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.274799  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.275157  362928 node_ready.go:35] waiting up to 6m0s for node "addons-400631" to be "Ready" ...
	I0408 11:13:48.278882  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.278897  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278906  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278922  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.278931  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278940  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.278944  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.278949  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.278950  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.278958  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278963  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.278965  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.278968  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.278977  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.278988  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278999  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279003  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279009  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279013  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279019  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279027  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.278909  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279035  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279038  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279045  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279047  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279052  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279065  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279066  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279074  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279027  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279083  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279095  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279097  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279103  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279106  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279113  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279119  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279085  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279076  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279198  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.279208  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.279254  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279293  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279329  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279341  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279360  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279360  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279370  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279372  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279376  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279385  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.279416  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279427  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.281194  362928 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-400631 service yakd-dashboard -n yakd-dashboard
	
	I0408 11:13:48.279606  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279641  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279663  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279681  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.279706  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.279717  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.281424  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.281432  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.281440  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.282830  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.281452  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.281462  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.282925  362928 addons.go:470] Verifying addon registry=true in "addons-400631"
	I0408 11:13:48.281619  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.281638  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.281882  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.281890  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.282770  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.284325  362928 out.go:177] * Verifying registry addon...
	I0408 11:13:48.284329  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.284337  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.284351  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.285923  362928 addons.go:470] Verifying addon metrics-server=true in "addons-400631"
	I0408 11:13:48.286170  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.286200  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.286217  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.286226  362928 addons.go:470] Verifying addon ingress=true in "addons-400631"
	I0408 11:13:48.287474  362928 out.go:177] * Verifying ingress addon...
	I0408 11:13:48.286690  362928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 11:13:48.289786  362928 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 11:13:48.294927  362928 node_ready.go:49] node "addons-400631" has status "Ready":"True"
	I0408 11:13:48.294947  362928 node_ready.go:38] duration metric: took 19.764649ms for node "addons-400631" to be "Ready" ...
	I0408 11:13:48.294968  362928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:13:48.333084  362928 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 11:13:48.333109  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:48.334970  362928 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 11:13:48.334995  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:48.337173  362928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-77sc7" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.340881  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.340898  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.341262  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:48.341282  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:48.341287  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:48.341374  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.341388  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:48.341523  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:48.341536  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	W0408 11:13:48.341622  362928 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0408 11:13:48.404775  362928 pod_ready.go:92] pod "coredns-76f75df574-77sc7" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:48.404804  362928 pod_ready.go:81] duration metric: took 67.600602ms for pod "coredns-76f75df574-77sc7" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.404821  362928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qxns4" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.434723  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:13:48.480200  362928 pod_ready.go:92] pod "coredns-76f75df574-qxns4" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:48.480228  362928 pod_ready.go:81] duration metric: took 75.401484ms for pod "coredns-76f75df574-qxns4" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.480239  362928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.539680  362928 pod_ready.go:92] pod "etcd-addons-400631" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:48.539708  362928 pod_ready.go:81] duration metric: took 59.463229ms for pod "etcd-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.539717  362928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.587708  362928 pod_ready.go:92] pod "kube-apiserver-addons-400631" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:48.587736  362928 pod_ready.go:81] duration metric: took 48.010736ms for pod "kube-apiserver-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.587751  362928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.679424  362928 pod_ready.go:92] pod "kube-controller-manager-addons-400631" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:48.679451  362928 pod_ready.go:81] duration metric: took 91.692767ms for pod "kube-controller-manager-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.679464  362928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6r2k" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:48.784146  362928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-400631" context rescaled to 1 replicas
	I0408 11:13:48.796507  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:48.807062  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:49.118398  362928 pod_ready.go:92] pod "kube-proxy-s6r2k" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:49.118427  362928 pod_ready.go:81] duration metric: took 438.956103ms for pod "kube-proxy-s6r2k" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:49.118437  362928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:49.290630  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.666297855s)
	I0408 11:13:49.290683  362928 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.77180979s)
	I0408 11:13:49.292491  362928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:13:49.290682  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:49.293884  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:49.295120  362928 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0408 11:13:49.296348  362928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 11:13:49.296367  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 11:13:49.294279  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:49.296414  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:49.296433  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:49.294324  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:49.296442  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:49.296844  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:49.296879  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:49.296895  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:49.296908  362928 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-400631"
	I0408 11:13:49.298156  362928 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 11:13:49.300199  362928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 11:13:49.324861  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:49.339648  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:49.353767  362928 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 11:13:49.353791  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:49.480047  362928 pod_ready.go:92] pod "kube-scheduler-addons-400631" in "kube-system" namespace has status "Ready":"True"
	I0408 11:13:49.480089  362928 pod_ready.go:81] duration metric: took 361.643019ms for pod "kube-scheduler-addons-400631" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:49.480106  362928 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace to be "Ready" ...
	I0408 11:13:49.508781  362928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 11:13:49.508812  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 11:13:49.608598  362928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 11:13:49.608624  362928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0408 11:13:49.692608  362928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 11:13:49.793638  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:49.798372  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:49.807776  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:50.293572  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:50.295713  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:50.305838  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:50.701205  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.26642294s)
	I0408 11:13:50.701289  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:50.701304  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:50.701639  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:50.701662  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:50.701672  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:50.701681  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:50.701639  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:50.701943  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:50.701964  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:50.701978  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:50.795832  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:50.798021  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:50.806010  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:51.234583  362928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.541908697s)
	I0408 11:13:51.234660  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:51.234675  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:51.235030  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:51.235028  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:51.235066  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:51.235076  362928 main.go:141] libmachine: Making call to close driver server
	I0408 11:13:51.235089  362928 main.go:141] libmachine: (addons-400631) Calling .Close
	I0408 11:13:51.235498  362928 main.go:141] libmachine: (addons-400631) DBG | Closing plugin on server side
	I0408 11:13:51.235545  362928 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:13:51.235556  362928 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:13:51.238280  362928 addons.go:470] Verifying addon gcp-auth=true in "addons-400631"
	I0408 11:13:51.240385  362928 out.go:177] * Verifying gcp-auth addon...
	I0408 11:13:51.242700  362928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 11:13:51.269878  362928 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 11:13:51.269905  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:51.294236  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:51.295800  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:51.323643  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:51.487507  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:13:51.748644  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:51.808492  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:51.808862  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:51.823545  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:52.246309  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:52.296552  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:52.296721  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:52.306359  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:52.746681  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:52.794830  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:52.799103  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:52.816117  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:53.251531  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:53.292608  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:53.296432  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:53.305119  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:53.746487  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:53.793085  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:53.795964  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:53.805174  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:53.986303  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:13:54.250029  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:54.292725  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:54.295309  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:54.307171  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:54.750488  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:54.792368  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:54.795229  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:54.810288  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:55.246483  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:55.292666  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:55.294382  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:55.304852  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:55.748972  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:55.792893  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:55.794531  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:55.806767  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:56.247708  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:56.292952  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:56.295048  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:56.305718  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:56.487236  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:13:56.746904  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:56.796289  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:56.798525  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:56.807653  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:57.247789  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:57.301224  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:57.302059  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:57.305780  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:57.747721  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:57.794132  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:57.796211  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:57.805011  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:58.247716  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:58.293812  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:58.296213  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:58.306523  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:58.747875  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:58.792829  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:58.796067  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:58.806698  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:58.986995  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:13:59.246700  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:59.295281  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:59.297490  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:59.308009  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:13:59.747461  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:13:59.795940  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:13:59.796080  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:13:59.807254  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:00.249467  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:00.295915  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:00.296733  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:00.308178  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:00.746764  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:00.793061  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:00.796478  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:00.805115  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:01.247255  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:01.293471  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:01.295411  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:01.310186  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:01.487357  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:01.746891  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:01.791796  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:01.794434  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:01.810107  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:02.247161  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:02.293411  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:02.294719  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:02.307023  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:02.748467  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:02.794012  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:02.795528  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:02.810456  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:03.247287  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:03.295633  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:03.295859  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:03.306621  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:03.745257  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:03.747195  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:03.794185  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:03.796084  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:03.805506  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:04.247587  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:04.298039  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:04.305130  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:04.310234  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:04.749309  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:04.795370  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:04.796339  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:04.808012  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:05.247290  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:05.294104  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:05.294649  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:05.315153  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:05.746953  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:05.793185  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:05.795232  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:05.808997  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:05.987404  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:06.247119  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:06.292309  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:06.294859  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:06.308669  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:06.747564  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:06.803808  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:06.805440  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:06.809888  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:07.260171  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:07.296089  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:07.312536  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:07.317893  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:07.747611  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:07.794452  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:07.796898  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:07.804526  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:08.249258  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:08.292264  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:08.294967  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:08.304705  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:08.487565  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:08.747050  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:08.792650  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:08.794183  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:08.805654  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:09.246598  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:09.293130  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:09.296002  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:09.305341  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:09.747444  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:09.795460  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:09.799353  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:09.806013  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:10.487816  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:10.490873  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:10.492716  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:10.492942  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:10.493462  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:10.746759  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:10.793330  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:10.796144  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:10.805228  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:11.248453  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:11.293164  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:11.296044  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:11.304669  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:11.746889  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:11.792527  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:11.795065  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:11.804687  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:12.246625  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:12.292758  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:12.294628  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:12.305992  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:12.747447  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:12.792537  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:12.795199  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:12.805065  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:12.986537  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:13.247273  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:13.293620  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:13.294532  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:13.312886  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:13.747637  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:13.792890  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:13.796052  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:13.804896  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:14.247740  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:14.294446  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:14.294576  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:14.308536  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:14.747266  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:14.792835  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:14.794371  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:14.805761  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:14.986687  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:15.247469  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:15.293401  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:15.295102  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:15.305321  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:15.747103  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:15.794988  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:15.796640  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:15.805055  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:16.247940  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:16.294891  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:16.299274  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:16.306271  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:16.747095  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:16.792634  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:16.795695  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:16.809275  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:16.987858  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:17.247155  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:17.294901  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:17.296169  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:17.304975  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:17.747374  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:17.792399  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:17.795094  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:17.804463  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:18.247300  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:18.292715  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:18.294155  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:18.306772  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:18.748246  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:18.793431  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:18.795284  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:18.804715  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:18.993728  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:19.245828  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:19.293102  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:19.295463  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:19.309206  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:19.747086  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:19.793674  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:19.794153  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:19.805070  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:20.248640  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:20.293379  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:20.295915  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:20.306026  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:20.746650  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:20.792788  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:20.794767  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:20.805348  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:21.248075  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:21.293605  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:14:21.295776  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:21.306425  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:21.487990  362928 pod_ready.go:102] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"False"
	I0408 11:14:21.745481  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:21.793791  362928 kapi.go:107] duration metric: took 33.507098103s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 11:14:21.796067  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:21.804919  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:22.246240  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:22.294132  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:22.306374  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:22.746461  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:22.797110  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:22.806463  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:23.252562  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:23.293888  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:23.305147  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:23.746598  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:23.794708  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:23.806621  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:23.989391  362928 pod_ready.go:92] pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace has status "Ready":"True"
	I0408 11:14:23.989426  362928 pod_ready.go:81] duration metric: took 34.509311063s for pod "metrics-server-75d6c48ddd-h8wgl" in "kube-system" namespace to be "Ready" ...
	I0408 11:14:23.989440  362928 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gg2h7" in "kube-system" namespace to be "Ready" ...
	I0408 11:14:23.994607  362928 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gg2h7" in "kube-system" namespace has status "Ready":"True"
	I0408 11:14:23.994626  362928 pod_ready.go:81] duration metric: took 5.177975ms for pod "nvidia-device-plugin-daemonset-gg2h7" in "kube-system" namespace to be "Ready" ...
	I0408 11:14:23.994644  362928 pod_ready.go:38] duration metric: took 35.699657926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:14:23.994684  362928 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:14:23.994736  362928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:14:24.023162  362928 api_server.go:72] duration metric: took 46.092946035s to wait for apiserver process to appear ...
	I0408 11:14:24.023188  362928 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:14:24.023204  362928 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0408 11:14:24.030320  362928 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0408 11:14:24.032290  362928 api_server.go:141] control plane version: v1.29.3
	I0408 11:14:24.032396  362928 api_server.go:131] duration metric: took 9.130598ms to wait for apiserver health ...
	I0408 11:14:24.032428  362928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:14:24.065896  362928 system_pods.go:59] 18 kube-system pods found
	I0408 11:14:24.065929  362928 system_pods.go:61] "coredns-76f75df574-77sc7" [f0ac2f5a-f9f1-492a-b76f-fe2e42d7716b] Running
	I0408 11:14:24.065940  362928 system_pods.go:61] "csi-hostpath-attacher-0" [cb0a2b67-e88b-454f-a9b5-bd129a0fb36a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 11:14:24.065948  362928 system_pods.go:61] "csi-hostpath-resizer-0" [9afc5b55-24d0-49b6-9a37-1cc846b6ead9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 11:14:24.065957  362928 system_pods.go:61] "csi-hostpathplugin-cq8xv" [64ca9b0c-1b7a-4cf0-b0c3-9f9839a5135b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 11:14:24.065965  362928 system_pods.go:61] "etcd-addons-400631" [261c76e8-5c86-4a0b-958a-2c032f83f7b3] Running
	I0408 11:14:24.065970  362928 system_pods.go:61] "kube-apiserver-addons-400631" [d45d1d62-14f5-4185-a72e-3d01709db1a4] Running
	I0408 11:14:24.065975  362928 system_pods.go:61] "kube-controller-manager-addons-400631" [086dfae7-64de-4b1d-bf3d-f03ad3bd63e9] Running
	I0408 11:14:24.065981  362928 system_pods.go:61] "kube-ingress-dns-minikube" [99f3402e-2e51-4bec-b4c9-c371a7f13765] Running
	I0408 11:14:24.065987  362928 system_pods.go:61] "kube-proxy-s6r2k" [0cd3fbcf-1b07-4016-a55f-67576d93e7f6] Running
	I0408 11:14:24.065995  362928 system_pods.go:61] "kube-scheduler-addons-400631" [36af9df6-dd5c-48f1-94ac-a488e5c3f38f] Running
	I0408 11:14:24.066000  362928 system_pods.go:61] "metrics-server-75d6c48ddd-h8wgl" [329b40d1-d2e7-45b7-a96d-a64185bed172] Running
	I0408 11:14:24.066006  362928 system_pods.go:61] "nvidia-device-plugin-daemonset-gg2h7" [8391958b-ee3e-47ec-a464-3008401c9c38] Running
	I0408 11:14:24.066012  362928 system_pods.go:61] "registry-8dz4c" [128f4451-1f0a-4fd0-909f-96eb66c6de4c] Running
	I0408 11:14:24.066020  362928 system_pods.go:61] "registry-proxy-58x4t" [8f5c6ca1-70a0-4f77-889d-7215671cbab3] Running
	I0408 11:14:24.066028  362928 system_pods.go:61] "snapshot-controller-58dbcc7b99-q9fvl" [1b59eb00-9322-427c-8d92-e4dd33823347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:14:24.066037  362928 system_pods.go:61] "snapshot-controller-58dbcc7b99-s8mjc" [3ca5297f-199c-484f-99b4-2b865ae780e4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:14:24.066050  362928 system_pods.go:61] "storage-provisioner" [d5d55adc-9648-4b7f-8015-5fc6cffcbb35] Running
	I0408 11:14:24.066056  362928 system_pods.go:61] "tiller-deploy-7b677967b9-8rp29" [339d3599-80be-4668-bd18-039f66262f77] Running
	I0408 11:14:24.066065  362928 system_pods.go:74] duration metric: took 33.623717ms to wait for pod list to return data ...
	I0408 11:14:24.066081  362928 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:14:24.070062  362928 default_sa.go:45] found service account: "default"
	I0408 11:14:24.070083  362928 default_sa.go:55] duration metric: took 3.990597ms for default service account to be created ...
	I0408 11:14:24.070092  362928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:14:24.090047  362928 system_pods.go:86] 18 kube-system pods found
	I0408 11:14:24.090077  362928 system_pods.go:89] "coredns-76f75df574-77sc7" [f0ac2f5a-f9f1-492a-b76f-fe2e42d7716b] Running
	I0408 11:14:24.090089  362928 system_pods.go:89] "csi-hostpath-attacher-0" [cb0a2b67-e88b-454f-a9b5-bd129a0fb36a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 11:14:24.090098  362928 system_pods.go:89] "csi-hostpath-resizer-0" [9afc5b55-24d0-49b6-9a37-1cc846b6ead9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 11:14:24.090109  362928 system_pods.go:89] "csi-hostpathplugin-cq8xv" [64ca9b0c-1b7a-4cf0-b0c3-9f9839a5135b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 11:14:24.090116  362928 system_pods.go:89] "etcd-addons-400631" [261c76e8-5c86-4a0b-958a-2c032f83f7b3] Running
	I0408 11:14:24.090122  362928 system_pods.go:89] "kube-apiserver-addons-400631" [d45d1d62-14f5-4185-a72e-3d01709db1a4] Running
	I0408 11:14:24.090128  362928 system_pods.go:89] "kube-controller-manager-addons-400631" [086dfae7-64de-4b1d-bf3d-f03ad3bd63e9] Running
	I0408 11:14:24.090137  362928 system_pods.go:89] "kube-ingress-dns-minikube" [99f3402e-2e51-4bec-b4c9-c371a7f13765] Running
	I0408 11:14:24.090143  362928 system_pods.go:89] "kube-proxy-s6r2k" [0cd3fbcf-1b07-4016-a55f-67576d93e7f6] Running
	I0408 11:14:24.090150  362928 system_pods.go:89] "kube-scheduler-addons-400631" [36af9df6-dd5c-48f1-94ac-a488e5c3f38f] Running
	I0408 11:14:24.090156  362928 system_pods.go:89] "metrics-server-75d6c48ddd-h8wgl" [329b40d1-d2e7-45b7-a96d-a64185bed172] Running
	I0408 11:14:24.090163  362928 system_pods.go:89] "nvidia-device-plugin-daemonset-gg2h7" [8391958b-ee3e-47ec-a464-3008401c9c38] Running
	I0408 11:14:24.090169  362928 system_pods.go:89] "registry-8dz4c" [128f4451-1f0a-4fd0-909f-96eb66c6de4c] Running
	I0408 11:14:24.090175  362928 system_pods.go:89] "registry-proxy-58x4t" [8f5c6ca1-70a0-4f77-889d-7215671cbab3] Running
	I0408 11:14:24.090189  362928 system_pods.go:89] "snapshot-controller-58dbcc7b99-q9fvl" [1b59eb00-9322-427c-8d92-e4dd33823347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:14:24.090199  362928 system_pods.go:89] "snapshot-controller-58dbcc7b99-s8mjc" [3ca5297f-199c-484f-99b4-2b865ae780e4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:14:24.090207  362928 system_pods.go:89] "storage-provisioner" [d5d55adc-9648-4b7f-8015-5fc6cffcbb35] Running
	I0408 11:14:24.090214  362928 system_pods.go:89] "tiller-deploy-7b677967b9-8rp29" [339d3599-80be-4668-bd18-039f66262f77] Running
	I0408 11:14:24.090224  362928 system_pods.go:126] duration metric: took 20.125383ms to wait for k8s-apps to be running ...
	I0408 11:14:24.090232  362928 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:14:24.090324  362928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:14:24.114097  362928 system_svc.go:56] duration metric: took 23.851015ms WaitForService to wait for kubelet
	I0408 11:14:24.114139  362928 kubeadm.go:576] duration metric: took 46.183926446s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:14:24.114164  362928 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:14:24.117830  362928 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:14:24.117865  362928 node_conditions.go:123] node cpu capacity is 2
	I0408 11:14:24.117882  362928 node_conditions.go:105] duration metric: took 3.71222ms to run NodePressure ...
	I0408 11:14:24.117898  362928 start.go:240] waiting for startup goroutines ...
	I0408 11:14:24.247706  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:24.295033  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:24.307198  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:24.749005  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:24.795538  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:24.807492  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:25.246411  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:25.297507  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:25.307915  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:25.746110  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:25.804130  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:25.814616  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:26.246655  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:26.294264  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:26.309445  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:26.747294  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:26.793913  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:26.806811  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:27.248275  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:27.295424  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:27.306635  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:27.746738  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:27.795296  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:27.805913  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:28.246412  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:28.294776  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:28.306442  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:28.942340  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:28.942942  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:28.945053  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:29.247291  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:29.294899  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:29.306592  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:29.751319  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:29.794679  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:29.805175  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:30.248738  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:30.295447  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:30.311930  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:30.747530  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:30.794756  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:30.806093  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:31.248423  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:31.297436  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:31.306997  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:31.746806  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:31.795271  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:31.805234  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:32.250811  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:32.295057  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:32.304655  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:32.746802  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:32.794446  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:32.805067  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:33.248581  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:33.297017  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:33.305673  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:33.746959  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:33.795284  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:33.822395  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:34.246045  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:34.295437  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:34.305357  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:34.762724  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:34.794277  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:34.805405  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:35.246540  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:35.294988  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:35.305774  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:35.747569  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:35.795781  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:35.805498  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:36.247728  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:36.295013  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:36.305217  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:36.746544  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:36.795654  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:36.806400  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:37.246189  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:37.295960  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:37.305519  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:37.746743  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:37.795367  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:37.805632  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:38.246921  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:38.295292  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:38.305326  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:38.750681  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:38.794894  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:38.805763  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:39.247384  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:39.294635  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:39.305874  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:39.754513  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:39.794741  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:39.805603  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:40.247561  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:40.295967  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:40.304637  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:40.756381  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:40.811631  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:40.811818  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:41.247095  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:41.298126  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:41.306350  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:41.746726  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:41.798984  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:41.804511  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:42.246502  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:42.295127  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:42.307651  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:42.765277  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:42.805876  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:42.826380  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:43.246477  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:43.294691  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:43.309107  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:43.745717  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:43.794978  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:43.805727  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:44.247774  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:44.294823  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:44.305852  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:44.747063  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:44.801135  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:44.837691  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:45.249690  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:45.294733  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:45.305443  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:45.748816  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:45.798169  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:45.805511  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:46.246416  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:46.294938  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:46.305635  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:46.746830  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:46.794637  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:46.806390  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:47.246804  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:47.299414  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:47.312974  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:47.747803  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:47.801637  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:47.813571  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:48.246877  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:48.295421  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:48.305359  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:48.746676  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:48.795155  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:48.810564  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:49.246858  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:49.297183  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:49.314898  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:49.746883  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:49.796380  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:49.807046  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:50.248716  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:50.295896  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:50.307715  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:50.747170  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:50.794444  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:50.806510  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:14:51.246760  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:51.296058  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:51.305342  362928 kapi.go:107] duration metric: took 1m2.005139758s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 11:14:51.747079  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:51.794291  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:52.247342  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:52.294118  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:52.746527  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:52.795338  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:53.414319  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:53.423030  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:53.755971  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:53.796516  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:54.247341  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:54.294897  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:54.747669  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:54.795880  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:55.247809  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:55.295082  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:55.747320  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:55.794990  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:56.247770  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:56.295710  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:57.011521  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:57.011672  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:57.247753  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:57.295248  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:57.747011  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:57.795969  362928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:14:58.249397  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:58.295313  362928 kapi.go:107] duration metric: took 1m10.005523096s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 11:14:58.746406  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:59.248701  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:14:59.748064  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:15:00.246523  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:15:00.747776  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:15:01.246628  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:15:02.136691  362928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:15:02.247241  362928 kapi.go:107] duration metric: took 1m11.00453839s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0408 11:15:02.248901  362928 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-400631 cluster.
	I0408 11:15:02.250366  362928 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 11:15:02.251860  362928 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0408 11:15:02.253354  362928 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, inspektor-gadget, nvidia-device-plugin, yakd, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0408 11:15:02.254628  362928 addons.go:505] duration metric: took 1m24.324423057s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner inspektor-gadget nvidia-device-plugin yakd helm-tiller metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0408 11:15:02.254682  362928 start.go:245] waiting for cluster config update ...
	I0408 11:15:02.254701  362928 start.go:254] writing updated cluster config ...
	I0408 11:15:02.255000  362928 ssh_runner.go:195] Run: rm -f paused
	I0408 11:15:02.309093  362928 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 11:15:02.311033  362928 out.go:177] * Done! kubectl is now configured to use "addons-400631" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	edddc1d7a13b5       beae173ccac6a       1 second ago         Exited              registry-test                            0                   bc8719bdacf10       registry-test
	fd632210c280d       98f6c3b32d565       3 seconds ago        Exited              helm-test                                0                   9327b7b979061       helm-test
	a332f9efd2945       a416a98b71e22       6 seconds ago        Exited              helper-pod                               0                   afc31129bc999       helper-pod-delete-pvc-dbb427bd-2a28-4821-82f4-4fe92ed51900
	b47a4ce38b6d5       ba5dc23f65d4c       9 seconds ago        Exited              busybox                                  0                   e9689efa63b65       test-local-path
	3a2606d583656       a416a98b71e22       15 seconds ago       Exited              helper-pod                               0                   162078613a6e2       helper-pod-create-pvc-dbb427bd-2a28-4821-82f4-4fe92ed51900
	d24f60534f324       e45ec2747dd93       19 seconds ago       Exited              gadget                                   2                   63b84b15d7942       gadget-dg4q6
	81bcb6db486fe       db2fc13d44d50       19 seconds ago       Running             gcp-auth                                 0                   ce9ee54c89100       gcp-auth-7d69788767-x2b9s
	45c0e031bc316       ffcc66479b5ba       23 seconds ago       Running             controller                               0                   cb0fff391c47d       ingress-nginx-controller-65496f9567-hp2v4
	e8b4b0d7768a5       738351fd438f0       30 seconds ago       Running             csi-snapshotter                          0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	01c04687464e8       931dbfd16f87c       31 seconds ago       Running             csi-provisioner                          0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	9a1cc9218d48a       e899260153aed       33 seconds ago       Running             liveness-probe                           0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	7b978c7e4f37f       e255e073c508c       34 seconds ago       Running             hostpath                                 0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	0abd7135e3f0d       88ef14a257f42       35 seconds ago       Running             node-driver-registrar                    0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	5ddb0ae5712b1       b29d748098e32       37 seconds ago       Exited              patch                                    0                   33ba09a00cb31       gcp-auth-certs-patch-bnsxp
	879bf3940b9f6       b29d748098e32       37 seconds ago       Exited              create                                   0                   048847d939fd3       gcp-auth-certs-create-vbsft
	f742cc6b7bd01       59cbb42146a37       37 seconds ago       Running             csi-attacher                             0                   f28de44629c82       csi-hostpath-attacher-0
	cceb17d102723       a1ed5895ba635       38 seconds ago       Running             csi-external-health-monitor-controller   0                   f38bf813fcab0       csi-hostpathplugin-cq8xv
	f66c474b725c3       19a639eda60f0       40 seconds ago       Running             csi-resizer                              0                   1e14b2afb95e9       csi-hostpath-resizer-0
	69fd64ad94e99       b29d748098e32       41 seconds ago       Exited              patch                                    1                   1a14f2bd2d77e       ingress-nginx-admission-patch-cbxvc
	df5f906b50017       b29d748098e32       41 seconds ago       Exited              create                                   0                   7988335de0ee1       ingress-nginx-admission-create-78smk
	5eabc1a51cd54       e16d1e3a10667       43 seconds ago       Running             local-path-provisioner                   0                   500ace387bfbc       local-path-provisioner-78b46b4d5c-xnjmd
	60d1fd7f32c63       aa61ee9c70bc4       46 seconds ago       Running             volume-snapshot-controller               0                   1bf59c6a01e67       snapshot-controller-58dbcc7b99-s8mjc
	4a0210f42ce0f       aa61ee9c70bc4       46 seconds ago       Running             volume-snapshot-controller               0                   4288ebb6f9b1c       snapshot-controller-58dbcc7b99-q9fvl
	3b91417fe5973       31de47c733c91       47 seconds ago       Running             yakd                                     0                   8aa53bc15f985       yakd-dashboard-9947fc6bf-shdpp
	1eb6354d9f73b       a24c7c057ec87       58 seconds ago       Running             metrics-server                           0                   be74f7a4d52ac       metrics-server-75d6c48ddd-h8wgl
	bba55d186d09a       38c5e506fa551       About a minute ago   Running             registry-proxy                           0                   e3a69f8e1f004       registry-proxy-58x4t
	6347efe432fe2       9363667f8aecb       About a minute ago   Running             registry                                 0                   5409613ce4417       registry-8dz4c
	4bfbede92a0e0       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   3061c84ad6f42       kube-ingress-dns-minikube
	57ad7c7e0a07e       3f39089e90831       About a minute ago   Running             tiller                                   0                   33979ba5f93b7       tiller-deploy-7b677967b9-8rp29
	524c885330fe3       1a9bd6f561b5c       About a minute ago   Running             cloud-spanner-emulator                   0                   ea30c8773c2a3       cloud-spanner-emulator-5446596998-b44tp
	bcea5aba35a74       f6df8d4b582f4       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   8d2d406c850c2       nvidia-device-plugin-daemonset-gg2h7
	aafd1302df069       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   47c3c0982b41a       storage-provisioner
	efcf6cf768615       cbb01a7bd410d       About a minute ago   Running             coredns                                  0                   5eb5463169a9d       coredns-76f75df574-77sc7
	54e7d6328605d       a1d263b5dc5b0       About a minute ago   Running             kube-proxy                               0                   5d9cc9579c545       kube-proxy-s6r2k
	bc27a68545798       8c390d98f50c0       2 minutes ago        Running             kube-scheduler                           0                   fdecded06ba3c       kube-scheduler-addons-400631
	bce1890f11d6d       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   61d447dd5b813       etcd-addons-400631
	b31175f1ddb43       6052a25da3f97       2 minutes ago        Running             kube-controller-manager                  0                   8f2e00e81cc79       kube-controller-manager-addons-400631
	f7482b68936ee       39f995c9f1996       2 minutes ago        Running             kube-apiserver                           0                   4ce6c2e697d03       kube-apiserver-addons-400631
	
	
	==> containerd <==
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.121137067Z" level=info msg="StopPodSandbox for \"9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.121270347Z" level=info msg="Container to stop \"fd632210c280dab91cc1c29e6eb17d26c1c935c8c798ae1c12501d855c7e498a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.166331364Z" level=info msg="shim disconnected" id=9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835 namespace=k8s.io
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.166474526Z" level=warning msg="cleaning up after shim disconnected" id=9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835 namespace=k8s.io
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.166609980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.258224974Z" level=info msg="TearDown network for sandbox \"9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835\" successfully"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.258309175Z" level=info msg="StopPodSandbox for \"9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835\" returns successfully"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.441608347Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.443162629Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:latest: active requests=0, bytes read=775567"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.446890356Z" level=info msg="ImageCreate event name:\"sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.450490614Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.451447878Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:latest\" with image id \"sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\", repo tag \"gcr.io/k8s-minikube/busybox:latest\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b\", size \"775541\" in 2.194299589s"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.451597052Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:latest\" returns image reference \"sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.456776402Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.461353456Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.463419244Z" level=info msg="CreateContainer within sandbox \"bc8719bdacf10524d6a77d4f235b231a8941ac69ea06ef7c8462564bc675afa1\" for container &ContainerMetadata{Name:registry-test,Attempt:0,}"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.489903906Z" level=info msg="CreateContainer within sandbox \"bc8719bdacf10524d6a77d4f235b231a8941ac69ea06ef7c8462564bc675afa1\" for &ContainerMetadata{Name:registry-test,Attempt:0,} returns container id \"edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.492263548Z" level=info msg="StartContainer for \"edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8\""
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.582950058Z" level=info msg="StartContainer for \"edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8\" returns successfully"
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.647339649Z" level=info msg="shim disconnected" id=edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8 namespace=k8s.io
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.647442944Z" level=warning msg="cleaning up after shim disconnected" id=edddc1d7a13b55e520c7aecca00989bc30d2166d9306041a76bd61ae4f9880c8 namespace=k8s.io
	Apr 08 11:15:19 addons-400631 containerd[653]: time="2024-04-08T11:15:19.647460907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 08 11:15:20 addons-400631 containerd[653]: time="2024-04-08T11:15:20.160487660Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 08 11:15:20 addons-400631 containerd[653]: time="2024-04-08T11:15:20.266394864Z" level=info msg="StopContainer for \"1eb6354d9f73b52d807e863c261f8c2e0b47cd751049a9aa2092ffd988ca3d0d\" with timeout 30 (s)"
	Apr 08 11:15:20 addons-400631 containerd[653]: time="2024-04-08T11:15:20.267121373Z" level=info msg="Stop container \"1eb6354d9f73b52d807e863c261f8c2e0b47cd751049a9aa2092ffd988ca3d0d\" with signal terminated"
	
	
	==> coredns [efcf6cf768615635a6c88fdeb6ef8e95c0a6ed3a4383d2f698f574475d498144] <==
	[INFO] 10.244.0.8:43264 - 47138 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098625s
	[INFO] 10.244.0.8:41936 - 42081 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059444s
	[INFO] 10.244.0.8:41936 - 60771 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062261s
	[INFO] 10.244.0.8:46076 - 54578 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072428s
	[INFO] 10.244.0.8:46076 - 62000 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111394s
	[INFO] 10.244.0.8:50694 - 18585 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096973s
	[INFO] 10.244.0.8:50694 - 64664 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085399s
	[INFO] 10.244.0.8:35899 - 60597 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091289s
	[INFO] 10.244.0.8:35899 - 40887 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000057814s
	[INFO] 10.244.0.8:56277 - 46372 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070935s
	[INFO] 10.244.0.8:56277 - 47654 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000162508s
	[INFO] 10.244.0.8:58238 - 30527 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062974s
	[INFO] 10.244.0.8:58238 - 15677 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00013833s
	[INFO] 10.244.0.8:35573 - 6898 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000030543s
	[INFO] 10.244.0.8:35573 - 36851 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000029195s
	[INFO] 10.244.0.22:41317 - 29137 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00028346s
	[INFO] 10.244.0.22:38761 - 63578 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000101511s
	[INFO] 10.244.0.22:57085 - 7495 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093168s
	[INFO] 10.244.0.22:52058 - 52286 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057237s
	[INFO] 10.244.0.22:60610 - 10749 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075259s
	[INFO] 10.244.0.22:58807 - 3531 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007826s
	[INFO] 10.244.0.22:33959 - 1506 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000630679s
	[INFO] 10.244.0.22:50770 - 55366 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001103897s
	[INFO] 10.244.0.26:34210 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000722044s
	[INFO] 10.244.0.26:39166 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112859s
	
	
	==> describe nodes <==
	Name:               addons-400631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-400631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=addons-400631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_13_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-400631
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-400631"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:13:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-400631
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:15:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:14:56 +0000   Mon, 08 Apr 2024 11:13:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:14:56 +0000   Mon, 08 Apr 2024 11:13:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:14:56 +0000   Mon, 08 Apr 2024 11:13:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:14:56 +0000   Mon, 08 Apr 2024 11:13:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    addons-400631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aa7a316c3ff4e5da6bed1bffa71a5f0
	  System UUID:                1aa7a316-c3ff-4e5d-a6be-d1bffa71a5f0
	  Boot ID:                    3dc77814-eaf3-48c4-907a-c3cd1fd37afd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-b44tp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  default                     registry-test                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  gadget                      gadget-dg4q6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  gcp-auth                    gcp-auth-7d69788767-x2b9s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  ingress-nginx               ingress-nginx-controller-65496f9567-hp2v4    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         94s
	  kube-system                 coredns-76f75df574-77sc7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     104s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpathplugin-cq8xv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-400631                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         116s
	  kube-system                 kube-apiserver-addons-400631                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-400631        200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-s6r2k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-400631                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 metrics-server-75d6c48ddd-h8wgl              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         97s
	  kube-system                 nvidia-device-plugin-daemonset-gg2h7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 registry-8dz4c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 registry-proxy-58x4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 snapshot-controller-58dbcc7b99-q9fvl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 snapshot-controller-58dbcc7b99-s8mjc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 tiller-deploy-7b677967b9-8rp29               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  local-path-storage          local-path-provisioner-78b46b4d5c-xnjmd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-shdpp               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 102s  kube-proxy       
	  Normal  Starting                 117s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s  kubelet          Node addons-400631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s  kubelet          Node addons-400631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s  kubelet          Node addons-400631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                116s  kubelet          Node addons-400631 status is now: NodeReady
	  Normal  RegisteredNode           105s  node-controller  Node addons-400631 event: Registered Node addons-400631 in Controller
	
	
	==> dmesg <==
	[  +5.666119] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.059285] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.668400] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.361386] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +0.065842] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.222333] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.075344] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.334468] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +0.167890] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.048322] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.021879] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.285985] kauditd_printk_skb: 107 callbacks suppressed
	[Apr 8 11:14] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.722307] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.375721] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.402386] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.318944] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.105130] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.098378] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.165333] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.026337] kauditd_printk_skb: 20 callbacks suppressed
	[Apr 8 11:15] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.891434] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.036795] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.030029] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [bce1890f11d6deb61adca61032e95093a381e051d22709362c9e13bd2aa5223e] <==
	{"level":"warn","ts":"2024-04-08T11:14:28.927054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.890159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85603"}
	{"level":"info","ts":"2024-04-08T11:14:28.92708Z","caller":"traceutil/trace.go:171","msg":"trace[1866924910] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:955; }","duration":"134.948879ms","start":"2024-04-08T11:14:28.792124Z","end":"2024-04-08T11:14:28.927073Z","steps":["trace[1866924910] 'agreement among raft nodes before linearized reading'  (duration: 134.403248ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:44.183874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.887736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" ","response":"range_response_count:1 size:1742"}
	{"level":"info","ts":"2024-04-08T11:14:44.183958Z","caller":"traceutil/trace.go:171","msg":"trace[2046275187] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1041; }","duration":"104.982434ms","start":"2024-04-08T11:14:44.078961Z","end":"2024-04-08T11:14:44.183944Z","steps":["trace[2046275187] 'range keys from in-memory index tree'  (duration: 104.839962ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:53.391815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.84495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.102\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-08T11:14:53.395446Z","caller":"traceutil/trace.go:171","msg":"trace[25084807] range","detail":"{range_begin:/registry/masterleases/192.168.39.102; range_end:; response_count:1; response_revision:1113; }","duration":"218.638277ms","start":"2024-04-08T11:14:53.176792Z","end":"2024-04-08T11:14:53.39543Z","steps":["trace[25084807] 'range keys from in-memory index tree'  (duration: 214.722735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:53.392191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.690757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-08T11:14:53.39587Z","caller":"traceutil/trace.go:171","msg":"trace[769619995] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1113; }","duration":"160.396956ms","start":"2024-04-08T11:14:53.235461Z","end":"2024-04-08T11:14:53.395858Z","steps":["trace[769619995] 'range keys from in-memory index tree'  (duration: 156.422689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:53.392269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.797151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-04-08T11:14:53.396104Z","caller":"traceutil/trace.go:171","msg":"trace[1299654515] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1113; }","duration":"114.648885ms","start":"2024-04-08T11:14:53.281446Z","end":"2024-04-08T11:14:53.396095Z","steps":["trace[1299654515] 'range keys from in-memory index tree'  (duration: 109.345907ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:14:56.982521Z","caller":"traceutil/trace.go:171","msg":"trace[308865837] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1151; }","duration":"249.026292ms","start":"2024-04-08T11:14:56.733471Z","end":"2024-04-08T11:14:56.982498Z","steps":["trace[308865837] 'read index received'  (duration: 248.826238ms)","trace[308865837] 'applied index is now lower than readState.Index'  (duration: 199.576µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T11:14:56.983119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.091478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-04-08T11:14:56.983186Z","caller":"traceutil/trace.go:171","msg":"trace[263836007] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1119; }","duration":"202.186805ms","start":"2024-04-08T11:14:56.78099Z","end":"2024-04-08T11:14:56.983176Z","steps":["trace[263836007] 'agreement among raft nodes before linearized reading'  (duration: 202.044994ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:56.983553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.033243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-04-08T11:14:56.983607Z","caller":"traceutil/trace.go:171","msg":"trace[463176477] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1119; }","duration":"102.104478ms","start":"2024-04-08T11:14:56.881493Z","end":"2024-04-08T11:14:56.983597Z","steps":["trace[463176477] 'agreement among raft nodes before linearized reading'  (duration: 101.97783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:14:56.983614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.129246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-08T11:14:56.985576Z","caller":"traceutil/trace.go:171","msg":"trace[1576540548] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1119; }","duration":"252.092137ms","start":"2024-04-08T11:14:56.733447Z","end":"2024-04-08T11:14:56.985539Z","steps":["trace[1576540548] 'agreement among raft nodes before linearized reading'  (duration: 249.488874ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:14:56.988233Z","caller":"traceutil/trace.go:171","msg":"trace[1362115405] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"278.406021ms","start":"2024-04-08T11:14:56.709817Z","end":"2024-04-08T11:14:56.988223Z","steps":["trace[1362115405] 'process raft request'  (duration: 272.573996ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:15:02.120613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.680304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-08T11:15:02.120761Z","caller":"traceutil/trace.go:171","msg":"trace[24520232] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1152; }","duration":"386.86105ms","start":"2024-04-08T11:15:01.733888Z","end":"2024-04-08T11:15:02.12075Z","steps":["trace[24520232] 'range keys from in-memory index tree'  (duration: 386.576066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:15:02.120795Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:15:01.733873Z","time spent":"386.911923ms","remote":"127.0.0.1:41248","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11509,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-08T11:15:02.12092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.937046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/yakd-dashboard/\" range_end:\"/registry/secrets/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T11:15:02.120935Z","caller":"traceutil/trace.go:171","msg":"trace[763016225] range","detail":"{range_begin:/registry/secrets/yakd-dashboard/; range_end:/registry/secrets/yakd-dashboard0; response_count:0; response_revision:1152; }","duration":"264.951463ms","start":"2024-04-08T11:15:01.855978Z","end":"2024-04-08T11:15:02.120929Z","steps":["trace[763016225] 'range keys from in-memory index tree'  (duration: 264.89789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:15:02.121129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.626784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-04-08T11:15:02.121145Z","caller":"traceutil/trace.go:171","msg":"trace[491588142] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1152; }","duration":"208.663722ms","start":"2024-04-08T11:15:01.912475Z","end":"2024-04-08T11:15:02.121139Z","steps":["trace[491588142] 'count revisions from in-memory index tree'  (duration: 208.581359ms)"],"step_count":1}
	
	
	==> gcp-auth [81bcb6db486feac1c8f85de3da5964508996e0e6ea577f34ef1e3c33a9205aa0] <==
	2024/04/08 11:15:01 GCP Auth Webhook started!
	2024/04/08 11:15:02 Ready to marshal response ...
	2024/04/08 11:15:02 Ready to write response ...
	2024/04/08 11:15:02 Ready to marshal response ...
	2024/04/08 11:15:02 Ready to write response ...
	2024/04/08 11:15:13 Ready to marshal response ...
	2024/04/08 11:15:13 Ready to write response ...
	2024/04/08 11:15:13 Ready to marshal response ...
	2024/04/08 11:15:13 Ready to write response ...
	2024/04/08 11:15:14 Ready to marshal response ...
	2024/04/08 11:15:14 Ready to write response ...
	2024/04/08 11:15:14 Ready to marshal response ...
	2024/04/08 11:15:14 Ready to write response ...
	
	
	==> kernel <==
	 11:15:21 up 2 min,  0 users,  load average: 2.74, 1.39, 0.54
	Linux addons-400631 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7482b68936ee84476cae89c3eafd497340c3d903107c10d638b5ab138447332] <==
	W0408 11:13:45.483818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 11:13:45.483856       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 11:13:45.485505       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0408 11:13:46.093870       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0408 11:13:46.500941       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:13:46.500995       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:13:47.020859       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:13:47.020948       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:13:47.094555       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:13:47.094635       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:13:47.747368       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.104.34.178"}
	I0408 11:13:47.839321       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.99.168.202"}
	I0408 11:13:47.942336       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0408 11:13:48.929576       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.100.57.112"}
	I0408 11:13:48.962625       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0408 11:13:49.123617       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.220.141"}
	I0408 11:13:50.981568       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.230.182"}
	E0408 11:14:23.583948       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.18.10:443: connect: connection refused
	W0408 11:14:23.584276       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 11:14:23.584475       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0408 11:14:23.585959       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.18.10:443: connect: connection refused
	E0408 11:14:23.590191       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.18.10:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.18.10:443: connect: connection refused
	I0408 11:14:23.674846       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [b31175f1ddb4319a039c02f8e4eeae38ce230b88eb7fd99c244077d72f2520b5] <==
	I0408 11:14:47.310261       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0408 11:14:47.317975       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0408 11:14:47.320115       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0408 11:14:47.349874       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0408 11:14:47.369984       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0408 11:14:47.376947       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0408 11:14:47.378464       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0408 11:14:49.729375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="9.062312ms"
	I0408 11:14:49.730352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="79.528µs"
	I0408 11:14:58.030530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="56.416µs"
	I0408 11:15:02.174105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="14.095335ms"
	I0408 11:15:02.174839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="636.294µs"
	I0408 11:15:02.503596       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0408 11:15:02.542407       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 11:15:02.666641       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 11:15:06.602251       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 11:15:06.602319       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 11:15:11.938028       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="15.836104ms"
	I0408 11:15:11.940026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="50.019µs"
	I0408 11:15:13.958136       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0408 11:15:17.023126       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0408 11:15:17.028471       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0408 11:15:17.137975       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0408 11:15:17.140885       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0408 11:15:20.243173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-75d6c48ddd" duration="6.184µs"
	
	
	==> kube-proxy [54e7d6328605ddf78548c049c922b274e9a270687642154da30eb71f6cc37e64] <==
	I0408 11:13:38.622792       1 server_others.go:72] "Using iptables proxy"
	I0408 11:13:38.651105       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0408 11:13:38.754091       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:13:38.754112       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:13:38.754124       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:13:38.775982       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:13:38.776271       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:13:38.776284       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:13:38.779975       1 config.go:188] "Starting service config controller"
	I0408 11:13:38.780024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:13:38.780044       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:13:38.780048       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:13:38.780099       1 config.go:315] "Starting node config controller"
	I0408 11:13:38.780112       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:13:38.881478       1 shared_informer.go:318] Caches are synced for node config
	I0408 11:13:38.881577       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:13:38.881603       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bc27a68545798606b3509b6cef8e7660ca38b3c30c470898e4bfaecd5b3d3e87] <==
	W0408 11:13:21.679754       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:13:21.679915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:13:21.683911       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 11:13:21.683960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 11:13:21.684019       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:13:21.684058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:13:21.684114       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:13:21.684122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:13:21.684164       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:13:21.684201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:13:21.684239       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 11:13:21.684275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 11:13:21.688883       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:13:21.690273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:13:21.690394       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 11:13:21.690771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 11:13:21.690468       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:13:21.691265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 11:13:21.690249       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 11:13:21.691608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 11:13:22.503523       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:13:22.503998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 11:13:22.595609       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:13:22.595665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0408 11:13:23.262580       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 11:15:16 addons-400631 kubelet[1242]: I0408 11:15:16.584619    1242 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g8gsb\" (UniqueName: \"kubernetes.io/projected/87df0390-47fc-4adb-a33b-61d94be0d654-kube-api-access-g8gsb\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:16 addons-400631 kubelet[1242]: I0408 11:15:16.584739    1242 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/87df0390-47fc-4adb-a33b-61d94be0d654-data\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:16 addons-400631 kubelet[1242]: I0408 11:15:16.584753    1242 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/87df0390-47fc-4adb-a33b-61d94be0d654-gcp-creds\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:16 addons-400631 kubelet[1242]: I0408 11:15:16.584762    1242 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/87df0390-47fc-4adb-a33b-61d94be0d654-script\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:17 addons-400631 kubelet[1242]: I0408 11:15:17.112058    1242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc31129bc9992547cffb65c8189c932b144ddf2e8911dca335c476b4586db78"
	Apr 08 11:15:18 addons-400631 kubelet[1242]: I0408 11:15:18.958022    1242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d4d306-f501-4e93-9f8e-05368a5f0e44" path="/var/lib/kubelet/pods/49d4d306-f501-4e93-9f8e-05368a5f0e44/volumes"
	Apr 08 11:15:18 addons-400631 kubelet[1242]: I0408 11:15:18.959007    1242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e647f5f-4abc-48d0-b664-1166c6d414ce" path="/var/lib/kubelet/pods/7e647f5f-4abc-48d0-b664-1166c6d414ce/volumes"
	Apr 08 11:15:18 addons-400631 kubelet[1242]: I0408 11:15:18.959467    1242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87df0390-47fc-4adb-a33b-61d94be0d654" path="/var/lib/kubelet/pods/87df0390-47fc-4adb-a33b-61d94be0d654/volumes"
	Apr 08 11:15:19 addons-400631 kubelet[1242]: I0408 11:15:19.417783    1242 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhmv\" (UniqueName: \"kubernetes.io/projected/f7fbd9f6-99e1-481a-90d2-58d9899f7933-kube-api-access-fqhmv\") pod \"f7fbd9f6-99e1-481a-90d2-58d9899f7933\" (UID: \"f7fbd9f6-99e1-481a-90d2-58d9899f7933\") "
	Apr 08 11:15:19 addons-400631 kubelet[1242]: I0408 11:15:19.420922    1242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fbd9f6-99e1-481a-90d2-58d9899f7933-kube-api-access-fqhmv" (OuterVolumeSpecName: "kube-api-access-fqhmv") pod "f7fbd9f6-99e1-481a-90d2-58d9899f7933" (UID: "f7fbd9f6-99e1-481a-90d2-58d9899f7933"). InnerVolumeSpecName "kube-api-access-fqhmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 11:15:19 addons-400631 kubelet[1242]: I0408 11:15:19.518321    1242 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fqhmv\" (UniqueName: \"kubernetes.io/projected/f7fbd9f6-99e1-481a-90d2-58d9899f7933-kube-api-access-fqhmv\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:20 addons-400631 kubelet[1242]: I0408 11:15:20.134561    1242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9327b7b979061013035d0740ae4790a78fdb02c1275f064e79e238f0745ce835"
	Apr 08 11:15:20 addons-400631 kubelet[1242]: I0408 11:15:20.954402    1242 scope.go:117] "RemoveContainer" containerID="d24f60534f32410ca646e8f1a8cb195c2b663e83daf13a4b4962c1100bc2231e"
	Apr 08 11:15:20 addons-400631 kubelet[1242]: E0408 11:15:20.955020    1242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 20s restarting failed container=gadget pod=gadget-dg4q6_gadget(fa31b009-9dc2-4984-9b81-ca9a28cb6d1c)\"" pod="gadget/gadget-dg4q6" podUID="fa31b009-9dc2-4984-9b81-ca9a28cb6d1c"
	Apr 08 11:15:20 addons-400631 kubelet[1242]: I0408 11:15:20.961019    1242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fbd9f6-99e1-481a-90d2-58d9899f7933" path="/var/lib/kubelet/pods/f7fbd9f6-99e1-481a-90d2-58d9899f7933/volumes"
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.439326    1242 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqqlw\" (UniqueName: \"kubernetes.io/projected/aba293a9-546d-4e6c-9432-abe283674660-kube-api-access-bqqlw\") pod \"aba293a9-546d-4e6c-9432-abe283674660\" (UID: \"aba293a9-546d-4e6c-9432-abe283674660\") "
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.439373    1242 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aba293a9-546d-4e6c-9432-abe283674660-gcp-creds\") pod \"aba293a9-546d-4e6c-9432-abe283674660\" (UID: \"aba293a9-546d-4e6c-9432-abe283674660\") "
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.439985    1242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba293a9-546d-4e6c-9432-abe283674660-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "aba293a9-546d-4e6c-9432-abe283674660" (UID: "aba293a9-546d-4e6c-9432-abe283674660"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.440150    1242 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aba293a9-546d-4e6c-9432-abe283674660-gcp-creds\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.445200    1242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba293a9-546d-4e6c-9432-abe283674660-kube-api-access-bqqlw" (OuterVolumeSpecName: "kube-api-access-bqqlw") pod "aba293a9-546d-4e6c-9432-abe283674660" (UID: "aba293a9-546d-4e6c-9432-abe283674660"). InnerVolumeSpecName "kube-api-access-bqqlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.540984    1242 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqqlw\" (UniqueName: \"kubernetes.io/projected/aba293a9-546d-4e6c-9432-abe283674660-kube-api-access-bqqlw\") on node \"addons-400631\" DevicePath \"\""
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.944864    1242 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc5fh\" (UniqueName: \"kubernetes.io/projected/329b40d1-d2e7-45b7-a96d-a64185bed172-kube-api-access-jc5fh\") pod \"329b40d1-d2e7-45b7-a96d-a64185bed172\" (UID: \"329b40d1-d2e7-45b7-a96d-a64185bed172\") "
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.944986    1242 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/329b40d1-d2e7-45b7-a96d-a64185bed172-tmp-dir\") pod \"329b40d1-d2e7-45b7-a96d-a64185bed172\" (UID: \"329b40d1-d2e7-45b7-a96d-a64185bed172\") "
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.945585    1242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329b40d1-d2e7-45b7-a96d-a64185bed172-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "329b40d1-d2e7-45b7-a96d-a64185bed172" (UID: "329b40d1-d2e7-45b7-a96d-a64185bed172"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 08 11:15:21 addons-400631 kubelet[1242]: I0408 11:15:21.952295    1242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329b40d1-d2e7-45b7-a96d-a64185bed172-kube-api-access-jc5fh" (OuterVolumeSpecName: "kube-api-access-jc5fh") pod "329b40d1-d2e7-45b7-a96d-a64185bed172" (UID: "329b40d1-d2e7-45b7-a96d-a64185bed172"). InnerVolumeSpecName "kube-api-access-jc5fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [aafd1302df06928bdb2af2c7a7ab3db8a6df506b870ea34048abcb3179572435] <==
	I0408 11:13:45.486453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 11:13:45.727428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 11:13:45.727564       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 11:13:45.816147       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 11:13:45.816382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-400631_ebb66e3e-0d3d-4e7c-b133-a1bdb00d9a49!
	I0408 11:13:45.819134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f24c110-ce4e-49fa-936c-f31839293c3d", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-400631_ebb66e3e-0d3d-4e7c-b133-a1bdb00d9a49 became leader
	I0408 11:13:45.918777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-400631_ebb66e3e-0d3d-4e7c-b133-a1bdb00d9a49!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-400631 -n addons-400631
helpers_test.go:261: (dbg) Run:  kubectl --context addons-400631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-78smk ingress-nginx-admission-patch-cbxvc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/HelmTiller]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-400631 describe pod ingress-nginx-admission-create-78smk ingress-nginx-admission-patch-cbxvc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-400631 describe pod ingress-nginx-admission-create-78smk ingress-nginx-admission-patch-cbxvc: exit status 1 (60.093478ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-78smk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cbxvc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-400631 describe pod ingress-nginx-admission-create-78smk ingress-nginx-admission-patch-cbxvc: exit status 1
--- FAIL: TestAddons/parallel/HelmTiller (14.43s)


Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 14.46
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.0/json-events 44.52
22 TestDownloadOnly/v1.30.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 152.47
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 142.68
38 TestAddons/parallel/Registry 20.26
39 TestAddons/parallel/Ingress 22.71
40 TestAddons/parallel/InspektorGadget 12.04
41 TestAddons/parallel/MetricsServer 5.95
44 TestAddons/parallel/CSI 59.34
45 TestAddons/parallel/Headlamp 14.19
46 TestAddons/parallel/CloudSpanner 5.7
47 TestAddons/parallel/LocalPath 12.28
48 TestAddons/parallel/NvidiaDevicePlugin 5.82
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.77
54 TestCertOptions 58.8
55 TestCertExpiration 301.09
57 TestForceSystemdFlag 72.34
58 TestForceSystemdEnv 73.64
60 TestKVMDriverInstallOrUpdate 5.04
64 TestErrorSpam/setup 43.78
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.59
68 TestErrorSpam/unpause 1.63
69 TestErrorSpam/stop 5.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 60.73
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 44.95
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
81 TestFunctional/serial/CacheCmd/cache/add_local 2.5
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 41.47
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.49
92 TestFunctional/serial/LogsFileCmd 1.43
93 TestFunctional/serial/InvalidService 4.58
95 TestFunctional/parallel/ConfigCmd 0.38
96 TestFunctional/parallel/DashboardCmd 11.88
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.83
103 TestFunctional/parallel/ServiceCmdConnect 11.6
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 40.76
107 TestFunctional/parallel/SSHCmd 0.41
108 TestFunctional/parallel/CpCmd 1.31
109 TestFunctional/parallel/MySQL 30.63
110 TestFunctional/parallel/FileSync 0.23
111 TestFunctional/parallel/CertSync 1.31
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
119 TestFunctional/parallel/License 0.62
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.81
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.55
130 TestFunctional/parallel/ImageCommands/Setup 2.42
131 TestFunctional/parallel/MountCmd/any-port 19.8
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.23
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.02
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.62
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.32
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.38
138 TestFunctional/parallel/MountCmd/specific-port 1.9
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.95
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
141 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
143 TestFunctional/parallel/ProfileCmd/profile_list 0.3
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
145 TestFunctional/parallel/ServiceCmd/List 0.48
155 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
157 TestFunctional/parallel/ServiceCmd/Format 0.34
158 TestFunctional/parallel/ServiceCmd/URL 0.39
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 274.29
166 TestMultiControlPlane/serial/DeployApp 6.1
167 TestMultiControlPlane/serial/PingHostFromPods 1.41
168 TestMultiControlPlane/serial/AddWorkerNode 46.11
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 13.69
172 TestMultiControlPlane/serial/StopSecondaryNode 93.14
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
174 TestMultiControlPlane/serial/RestartSecondaryNode 42.2
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.57
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 436.8
177 TestMultiControlPlane/serial/DeleteSecondaryNode 8.07
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.41
179 TestMultiControlPlane/serial/StopCluster 276.59
180 TestMultiControlPlane/serial/RestartCluster 137.92
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
182 TestMultiControlPlane/serial/AddSecondaryNode 172.91
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 60.25
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.72
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.34
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 94.66
219 TestMountStart/serial/StartWithMountFirst 28.7
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 33.64
222 TestMountStart/serial/VerifyMountSecond 0.46
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.39
226 TestMountStart/serial/RestartStopped 23.63
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 101.78
231 TestMultiNode/serial/DeployApp2Nodes 5.66
232 TestMultiNode/serial/PingHostFrom2Pods 0.93
233 TestMultiNode/serial/AddNode 43.35
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.81
237 TestMultiNode/serial/StopNode 2.42
238 TestMultiNode/serial/StartAfterStop 26.17
239 TestMultiNode/serial/RestartKeepsNodes 299.69
240 TestMultiNode/serial/DeleteNode 2.21
241 TestMultiNode/serial/StopMultiNode 184.24
242 TestMultiNode/serial/RestartMultiNode 81.5
243 TestMultiNode/serial/ValidateNameConflict 51.15
248 TestPreload 394.06
250 TestScheduledStopUnix 115.85
254 TestRunningBinaryUpgrade 198.35
256 TestKubernetesUpgrade 216.34
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 124.63
268 TestNetworkPlugins/group/false 4.01
272 TestNoKubernetes/serial/StartWithStopK8s 19.05
273 TestNoKubernetes/serial/Start 27.08
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
275 TestNoKubernetes/serial/ProfileList 0.88
276 TestNoKubernetes/serial/Stop 1.49
277 TestNoKubernetes/serial/StartNoArgs 44.27
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
279 TestStoppedBinaryUpgrade/Setup 2.54
280 TestStoppedBinaryUpgrade/Upgrade 140.07
289 TestPause/serial/Start 63.58
290 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
291 TestNetworkPlugins/group/auto/Start 111.93
292 TestNetworkPlugins/group/flannel/Start 115.09
293 TestNetworkPlugins/group/enable-default-cni/Start 153.41
294 TestPause/serial/SecondStartNoReconfiguration 83.82
295 TestNetworkPlugins/group/auto/KubeletFlags 0.24
296 TestNetworkPlugins/group/auto/NetCatPod 9.29
297 TestNetworkPlugins/group/flannel/ControllerPod 6.01
298 TestNetworkPlugins/group/auto/DNS 0.2
299 TestNetworkPlugins/group/auto/Localhost 0.17
300 TestNetworkPlugins/group/auto/HairPin 0.15
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
302 TestNetworkPlugins/group/flannel/NetCatPod 10.25
303 TestPause/serial/Pause 0.89
304 TestPause/serial/VerifyStatus 0.28
305 TestPause/serial/Unpause 0.74
306 TestPause/serial/PauseAgain 1
307 TestPause/serial/DeletePaused 0.88
308 TestPause/serial/VerifyDeletedResources 0.64
309 TestNetworkPlugins/group/flannel/DNS 0.19
310 TestNetworkPlugins/group/flannel/Localhost 0.16
311 TestNetworkPlugins/group/flannel/HairPin 0.17
312 TestNetworkPlugins/group/bridge/Start 65.89
313 TestNetworkPlugins/group/calico/Start 116.39
314 TestNetworkPlugins/group/kindnet/Start 106.37
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
320 TestNetworkPlugins/group/custom-flannel/Start 123.02
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
322 TestNetworkPlugins/group/bridge/NetCatPod 9.42
323 TestNetworkPlugins/group/bridge/DNS 0.17
324 TestNetworkPlugins/group/bridge/Localhost 0.14
325 TestNetworkPlugins/group/bridge/HairPin 0.13
327 TestStartStop/group/old-k8s-version/serial/FirstStart 205.65
328 TestNetworkPlugins/group/calico/ControllerPod 6.01
329 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
330 TestNetworkPlugins/group/calico/KubeletFlags 0.27
331 TestNetworkPlugins/group/calico/NetCatPod 11.36
332 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
333 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
334 TestNetworkPlugins/group/calico/DNS 0.25
335 TestNetworkPlugins/group/calico/Localhost 0.14
336 TestNetworkPlugins/group/calico/HairPin 0.14
337 TestNetworkPlugins/group/kindnet/DNS 0.18
338 TestNetworkPlugins/group/kindnet/Localhost 0.16
339 TestNetworkPlugins/group/kindnet/HairPin 0.16
341 TestStartStop/group/no-preload/serial/FirstStart 178.8
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.01
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
346 TestNetworkPlugins/group/custom-flannel/DNS 0.24
347 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
348 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
350 TestStartStop/group/newest-cni/serial/FirstStart 71.95
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.03
354 TestStartStop/group/newest-cni/serial/DeployApp 0
355 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.31
356 TestStartStop/group/newest-cni/serial/Stop 2.48
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
358 TestStartStop/group/newest-cni/serial/SecondStart 37.24
359 TestStartStop/group/old-k8s-version/serial/DeployApp 10.48
360 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
361 TestStartStop/group/old-k8s-version/serial/Stop 92.48
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/newest-cni/serial/Pause 2.52
367 TestStartStop/group/embed-certs/serial/FirstStart 60.97
368 TestStartStop/group/no-preload/serial/DeployApp 10.3
369 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
370 TestStartStop/group/no-preload/serial/Stop 92.52
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.22
373 TestStartStop/group/embed-certs/serial/DeployApp 9.34
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
375 TestStartStop/group/embed-certs/serial/Stop 92.54
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
377 TestStartStop/group/old-k8s-version/serial/SecondStart 206.32
378 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/no-preload/serial/SecondStart 312.85
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
381 TestStartStop/group/embed-certs/serial/SecondStart 385.25
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
385 TestStartStop/group/old-k8s-version/serial/Pause 2.81
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
393 TestStartStop/group/no-preload/serial/Pause 2.65
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
397 TestStartStop/group/embed-certs/serial/Pause 2.68
TestDownloadOnly/v1.20.0/json-events (24.63s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480610 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480610 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (24.633448044s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.63s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480610
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480610: exit status 85 (75.372949ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |          |
	|         | -p download-only-480610        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:11:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:11:13.945810  362037 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:11:13.945953  362037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:13.945966  362037 out.go:304] Setting ErrFile to fd 2...
	I0408 11:11:13.945973  362037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:13.946188  362037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	W0408 11:11:13.946318  362037 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18588-354699/.minikube/config/config.json: open /home/jenkins/minikube-integration/18588-354699/.minikube/config/config.json: no such file or directory
	I0408 11:11:13.946923  362037 out.go:298] Setting JSON to true
	I0408 11:11:13.947880  362037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3217,"bootTime":1712571457,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:11:13.947945  362037 start.go:139] virtualization: kvm guest
	I0408 11:11:13.950437  362037 out.go:97] [download-only-480610] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:11:13.951963  362037 out.go:169] MINIKUBE_LOCATION=18588
	W0408 11:11:13.950558  362037 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 11:11:13.950647  362037 notify.go:220] Checking for updates...
	I0408 11:11:13.953393  362037 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:11:13.954643  362037 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:11:13.955810  362037 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:11:13.957073  362037 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:11:13.959237  362037 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:11:13.959464  362037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:11:13.990299  362037 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:11:13.990320  362037 start.go:297] selected driver: kvm2
	I0408 11:11:13.990325  362037 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:11:13.990647  362037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:13.990764  362037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-354699/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:11:14.005548  362037 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:11:14.005600  362037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:11:14.006072  362037 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:11:14.006211  362037 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:11:14.006275  362037 cni.go:84] Creating CNI manager for ""
	I0408 11:11:14.006290  362037 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:11:14.006297  362037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:11:14.006341  362037 start.go:340] cluster config:
	{Name:download-only-480610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-480610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:11:14.006493  362037 iso.go:125] acquiring lock: {Name:mk9795a25e82a211f5efea96f359ae93d962e2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:14.008151  362037 out.go:97] Downloading VM boot image ...
	I0408 11:11:14.008189  362037 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18588-354699/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:11:23.620433  362037 out.go:97] Starting "download-only-480610" primary control-plane node in "download-only-480610" cluster
	I0408 11:11:23.620477  362037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 11:11:23.736364  362037 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0408 11:11:23.736412  362037 cache.go:56] Caching tarball of preloaded images
	I0408 11:11:23.736586  362037 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 11:11:23.738374  362037 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 11:11:23.738398  362037 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0408 11:11:23.847411  362037 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-480610 host does not exist
	  To start a cluster, run: "minikube start -p download-only-480610"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-480610
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.3/json-events (14.46s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-867752 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-867752 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (14.455303986s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (14.46s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-867752
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-867752: exit status 85 (75.335626ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-480610        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| delete  | -p download-only-480610        | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| start   | -o=json --download-only        | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-867752        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:11:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:11:38.919220  362243 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:11:38.919406  362243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:38.919413  362243 out.go:304] Setting ErrFile to fd 2...
	I0408 11:11:38.919419  362243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:38.920013  362243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:11:38.920670  362243 out.go:298] Setting JSON to true
	I0408 11:11:38.921573  362243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3242,"bootTime":1712571457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:11:38.921642  362243 start.go:139] virtualization: kvm guest
	I0408 11:11:38.924001  362243 out.go:97] [download-only-867752] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:11:38.925560  362243 out.go:169] MINIKUBE_LOCATION=18588
	I0408 11:11:38.924166  362243 notify.go:220] Checking for updates...
	I0408 11:11:38.928464  362243 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:11:38.930020  362243 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:11:38.931485  362243 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:11:38.932801  362243 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:11:38.935117  362243 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:11:38.935386  362243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:11:38.967963  362243 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:11:38.967982  362243 start.go:297] selected driver: kvm2
	I0408 11:11:38.967989  362243 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:11:38.968291  362243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:38.968392  362243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-354699/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:11:38.983102  362243 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:11:38.983146  362243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:11:38.983576  362243 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:11:38.983706  362243 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:11:38.983767  362243 cni.go:84] Creating CNI manager for ""
	I0408 11:11:38.983780  362243 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:11:38.983787  362243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:11:38.983836  362243 start.go:340] cluster config:
	{Name:download-only-867752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-867752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:11:38.983918  362243 iso.go:125] acquiring lock: {Name:mk9795a25e82a211f5efea96f359ae93d962e2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:38.985466  362243 out.go:97] Starting "download-only-867752" primary control-plane node in "download-only-867752" cluster
	I0408 11:11:38.985486  362243 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 11:11:39.487185  362243 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0408 11:11:39.487336  362243 cache.go:56] Caching tarball of preloaded images
	I0408 11:11:39.487545  362243 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 11:11:39.489545  362243 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0408 11:11:39.489581  362243 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0408 11:11:39.601689  362243 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:dcad3363f354722395d68e96a1f5de54 -> /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-867752 host does not exist
	  To start a cluster, run: "minikube start -p download-only-867752"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-867752
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-rc.0/json-events (44.52s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-025600 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-025600 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (44.521394996s)
--- PASS: TestDownloadOnly/v1.30.0-rc.0/json-events (44.52s)

TestDownloadOnly/v1.30.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-025600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-025600: exit status 85 (78.344009ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-480610           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| delete  | -p download-only-480610           | download-only-480610 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| start   | -o=json --download-only           | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-867752           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| delete  | -p download-only-867752           | download-only-867752 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC | 08 Apr 24 11:11 UTC |
	| start   | -o=json --download-only           | download-only-025600 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:11 UTC |                     |
	|         | -p download-only-025600           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:11:53
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:11:53.728137  362428 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:11:53.728248  362428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:53.728257  362428 out.go:304] Setting ErrFile to fd 2...
	I0408 11:11:53.728261  362428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:11:53.728463  362428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:11:53.729087  362428 out.go:298] Setting JSON to true
	I0408 11:11:53.730121  362428 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3257,"bootTime":1712571457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:11:53.730194  362428 start.go:139] virtualization: kvm guest
	I0408 11:11:53.732314  362428 out.go:97] [download-only-025600] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:11:53.734009  362428 out.go:169] MINIKUBE_LOCATION=18588
	I0408 11:11:53.732538  362428 notify.go:220] Checking for updates...
	I0408 11:11:53.736551  362428 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:11:53.737751  362428 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:11:53.738952  362428 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:11:53.740072  362428 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:11:53.742389  362428 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:11:53.742578  362428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:11:53.774406  362428 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:11:53.774432  362428 start.go:297] selected driver: kvm2
	I0408 11:11:53.774438  362428 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:11:53.774767  362428 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:53.774878  362428 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-354699/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:11:53.789824  362428 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:11:53.789872  362428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:11:53.790299  362428 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:11:53.790435  362428 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:11:53.790494  362428 cni.go:84] Creating CNI manager for ""
	I0408 11:11:53.790507  362428 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 11:11:53.790516  362428 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:11:53.790567  362428 start.go:340] cluster config:
	{Name:download-only-025600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:download-only-025600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:11:53.790661  362428 iso.go:125] acquiring lock: {Name:mk9795a25e82a211f5efea96f359ae93d962e2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:11:53.792397  362428 out.go:97] Starting "download-only-025600" primary control-plane node in "download-only-025600" cluster
	I0408 11:11:53.792426  362428 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime containerd
	I0408 11:11:53.935393  362428 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0408 11:11:53.935435  362428 cache.go:56] Caching tarball of preloaded images
	I0408 11:11:53.935605  362428 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime containerd
	I0408 11:11:53.937342  362428 out.go:97] Downloading Kubernetes v1.30.0-rc.0 preload ...
	I0408 11:11:53.937358  362428 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0408 11:11:54.044871  362428 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:aabac05a443031c1f7e77f50d04c43a6 -> /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0408 11:12:04.678418  362428 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0408 11:12:04.678549  362428 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18588-354699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0408 11:12:05.424424  362428 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on containerd
	I0408 11:12:05.424871  362428 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/download-only-025600/config.json ...
	I0408 11:12:05.424910  362428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/download-only-025600/config.json: {Name:mk96842fd213efc2380e8822b9b141b3e7cf62ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:12:05.425102  362428 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime containerd
	I0408 11:12:05.425278  362428 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18588-354699/.minikube/cache/linux/amd64/v1.30.0-rc.0/kubectl
	
	
	* The control-plane node download-only-025600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-025600"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-025600
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-192648 --alsologtostderr --binary-mirror http://127.0.0.1:33245 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-192648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-192648
--- PASS: TestBinaryMirror (0.58s)

TestOffline (152.47s)

=== RUN   TestOffline
=== PAUSE TestOffline


                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-385972 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-385972 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m31.401103774s)
helpers_test.go:175: Cleaning up "offline-containerd-385972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-385972
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-385972: (1.071828862s)
--- PASS: TestOffline (152.47s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-400631
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-400631: exit status 85 (67.902302ms)
-- stdout --
	* Profile "addons-400631" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-400631"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-400631
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-400631: exit status 85 (62.559888ms)
-- stdout --
	* Profile "addons-400631" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-400631"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (142.68s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-400631 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-400631 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.675890443s)
--- PASS: TestAddons/Setup (142.68s)

TestAddons/parallel/Registry (20.26s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 30.640522ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8dz4c" [128f4451-1f0a-4fd0-909f-96eb66c6de4c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005697244s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-58x4t" [8f5c6ca1-70a0-4f77-889d-7215671cbab3] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007012454s
addons_test.go:340: (dbg) Run:  kubectl --context addons-400631 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-400631 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-400631 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.068485987s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 ip
2024/04/08 11:15:21 [DEBUG] GET http://192.168.39.102:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.26s)

TestAddons/parallel/Ingress (22.71s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-400631 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-400631 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-400631 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dbb665a7-35ed-48c8-b3fe-21be372512ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dbb665a7-35ed-48c8-b3fe-21be372512ae] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004011647s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-400631 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.102
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-400631 addons disable ingress-dns --alsologtostderr -v=1: (1.575959411s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-400631 addons disable ingress --alsologtostderr -v=1: (7.898074574s)
--- PASS: TestAddons/parallel/Ingress (22.71s)

TestAddons/parallel/InspektorGadget (12.04s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dg4q6" [fa31b009-9dc2-4984-9b81-ca9a28cb6d1c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004758495s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-400631
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-400631: (6.036380984s)
--- PASS: TestAddons/parallel/InspektorGadget (12.04s)

TestAddons/parallel/MetricsServer (5.95s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.92744ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-h8wgl" [329b40d1-d2e7-45b7-a96d-a64185bed172] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008895718s
addons_test.go:415: (dbg) Run:  kubectl --context addons-400631 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

TestAddons/parallel/CSI (59.34s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.971773ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-400631 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-400631 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f5b7fded-7c9e-4d1e-92df-00525ae782d5] Pending
helpers_test.go:344: "task-pv-pod" [f5b7fded-7c9e-4d1e-92df-00525ae782d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f5b7fded-7c9e-4d1e-92df-00525ae782d5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00612655s
addons_test.go:584: (dbg) Run:  kubectl --context addons-400631 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-400631 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-400631 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-400631 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-400631 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-400631 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-400631 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [719d9a1f-efd5-4c6b-912d-f752087d861c] Pending
helpers_test.go:344: "task-pv-pod-restore" [719d9a1f-efd5-4c6b-912d-f752087d861c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [719d9a1f-efd5-4c6b-912d-f752087d861c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00392089s
addons_test.go:626: (dbg) Run:  kubectl --context addons-400631 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-400631 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-400631 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-400631 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.787446987s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.34s)
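The blocks of repeated `kubectl --context addons-400631 get pvc ... -o jsonpath={.status.phase}` lines above are the test helper re-running the same query until the PVC reports the expected phase. A minimal, self-contained sketch of that poll loop — the `poll_phase` helper and the `fake_pvc_phase` stub are illustrative names, not part of the minikube test suite, and the stub stands in for the real `kubectl` call so the sketch runs without a cluster:

```shell
#!/usr/bin/env bash
# poll_phase CMD EXPECTED RETRIES DELAY
# Re-runs CMD until its output equals EXPECTED, or RETRIES attempts are used up.
poll_phase() {
  local cmd=$1 expected=$2 retries=$3 delay=$4
  local i out
  for ((i = 0; i < retries; i++)); do
    out=$("$cmd")
    if [ "$out" = "$expected" ]; then
      echo "reached phase: $out"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for phase $expected (last: $out)" >&2
  return 1
}

# Stub in place of: kubectl get pvc hpvc -o 'jsonpath={.status.phase}' -n default
fake_pvc_phase() { echo "Bound"; }

poll_phase fake_pvc_phase Bound 5 0   # prints "reached phase: Bound"
```

In the real helper the command is the `kubectl ... jsonpath={.status.phase}` query shown in the log, with a multi-minute overall timeout rather than a fixed retry count.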

TestAddons/parallel/Headlamp (14.19s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-400631 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-400631 --alsologtostderr -v=1: (1.186196337s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-pf9rx" [89222ceb-3cab-497c-8737-46351c71a217] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-pf9rx" [89222ceb-3cab-497c-8737-46351c71a217] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-pf9rx" [89222ceb-3cab-497c-8737-46351c71a217] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004648947s
--- PASS: TestAddons/parallel/Headlamp (14.19s)

TestAddons/parallel/CloudSpanner (5.7s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-b44tp" [0b3a29f3-7be7-462c-bec1-19964a524b7d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014249847s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-400631
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (12.28s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-400631 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-400631 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4672e75c-f036-4f92-8c5a-42c0ec852095] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4672e75c-f036-4f92-8c5a-42c0ec852095] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4672e75c-f036-4f92-8c5a-42c0ec852095] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.011828199s
addons_test.go:891: (dbg) Run:  kubectl --context addons-400631 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 ssh "cat /opt/local-path-provisioner/pvc-dbb427bd-2a28-4821-82f4-4fe92ed51900_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-400631 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-400631 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-400631 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.28s)

TestAddons/parallel/NvidiaDevicePlugin (5.82s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gg2h7" [8391958b-ee3e-47ec-a464-3008401c9c38] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009044642s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-400631
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.82s)

TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-shdpp" [052a0984-fee2-4cc8-a05c-150c6a034c0c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004854202s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-400631 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-400631 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.77s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-400631
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-400631: (1m32.448784571s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-400631
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-400631
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-400631
--- PASS: TestAddons/StoppedEnableDisable (92.77s)

TestCertOptions (58.8s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-561268 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-561268 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (57.270693927s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-561268 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-561268 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-561268 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-561268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-561268
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-561268: (1.021240791s)
--- PASS: TestCertOptions (58.80s)

TestCertExpiration (301.09s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-474417 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-474417 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (46.138271418s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-474417 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-474417 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (1m12.712149676s)
helpers_test.go:175: Cleaning up "cert-expiration-474417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-474417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-474417: (2.239187096s)
--- PASS: TestCertExpiration (301.09s)

TestForceSystemdFlag (72.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682610 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682610 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m11.335332563s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682610 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-682610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682610
--- PASS: TestForceSystemdFlag (72.34s)

TestForceSystemdEnv (73.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-793227 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-793227 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m12.640199364s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-793227 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-793227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-793227
--- PASS: TestForceSystemdEnv (73.64s)

TestKVMDriverInstallOrUpdate (5.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.04s)

TestErrorSpam/setup (43.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-089364 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-089364 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-089364 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-089364 --driver=kvm2  --container-runtime=containerd: (43.783819752s)
--- PASS: TestErrorSpam/setup (43.78s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (5.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop: (2.304255715s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop: (1.200932227s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-089364 --log_dir /tmp/nospam-089364 stop: (1.764909593s)
--- PASS: TestErrorSpam/stop (5.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18588-354699/.minikube/files/etc/test/nested/copy/362025/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-059950 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m0.733066814s)
--- PASS: TestFunctional/serial/StartWithProxy (60.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.95s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --alsologtostderr -v=8
E0408 11:20:02.329813  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.335678  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.345942  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.366262  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.406517  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.486797  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.647274  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:02.967839  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:03.608895  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:04.889378  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:07.451009  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:12.571570  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:20:22.812478  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-059950 --alsologtostderr -v=8: (44.951182261s)
functional_test.go:659: soft start took 44.951865172s for "functional-059950" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.95s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-059950 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:3.1: (1.159118718s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:3.3: (1.18032283s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 cache add registry.k8s.io/pause:latest: (1.114792962s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-059950 /tmp/TestFunctionalserialCacheCmdcacheadd_local303718157/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache add minikube-local-cache-test:functional-059950
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 cache add minikube-local-cache-test:functional-059950: (2.122178578s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache delete minikube-local-cache-test:functional-059950
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-059950
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.360138ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 cache reload: (1.149495513s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 kubectl -- --context functional-059950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-059950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 11:20:43.293248  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-059950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.466605754s)
functional_test.go:757: restart took 41.466715691s for "functional-059950" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.47s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-059950 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 logs
E0408 11:21:24.254371  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 logs: (1.493308746s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.43s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 logs --file /tmp/TestFunctionalserialLogsFileCmd3375797374/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 logs --file /tmp/TestFunctionalserialLogsFileCmd3375797374/001/logs.txt: (1.424405145s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-059950 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-059950
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-059950: exit status 115 (312.709568ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.54:30201 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-059950 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-059950 delete -f testdata/invalidsvc.yaml: (1.063738649s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 config get cpus: exit status 14 (59.561617ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 config get cpus: exit status 14 (60.411728ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DashboardCmd (11.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059950 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059950 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 370795: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.88s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (142.191308ms)

-- stdout --
	* [functional-059950] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0408 11:22:04.961800  370231 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:22:04.961960  370231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:22:04.961972  370231 out.go:304] Setting ErrFile to fd 2...
	I0408 11:22:04.961978  370231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:22:04.962791  370231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:22:04.964137  370231 out.go:298] Setting JSON to false
	I0408 11:22:04.965145  370231 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3868,"bootTime":1712571457,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:22:04.965231  370231 start.go:139] virtualization: kvm guest
	I0408 11:22:04.967263  370231 out.go:177] * [functional-059950] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:22:04.969038  370231 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:22:04.969034  370231 notify.go:220] Checking for updates...
	I0408 11:22:04.970297  370231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:22:04.971427  370231 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:22:04.972614  370231 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:22:04.973782  370231 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:22:04.975034  370231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:22:04.976601  370231 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:22:04.977159  370231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:22:04.977207  370231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:04.992650  370231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0408 11:22:04.993114  370231 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:04.993737  370231 main.go:141] libmachine: Using API Version  1
	I0408 11:22:04.993781  370231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:04.994150  370231 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:04.994367  370231 main.go:141] libmachine: (functional-059950) Calling .DriverName
	I0408 11:22:04.994642  370231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:22:04.994971  370231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:22:04.995011  370231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:05.009286  370231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0408 11:22:05.009690  370231 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:05.010160  370231 main.go:141] libmachine: Using API Version  1
	I0408 11:22:05.010179  370231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:05.010492  370231 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:05.010710  370231 main.go:141] libmachine: (functional-059950) Calling .DriverName
	I0408 11:22:05.040923  370231 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 11:22:05.042291  370231 start.go:297] selected driver: kvm2
	I0408 11:22:05.042303  370231 start.go:901] validating driver "kvm2" against &{Name:functional-059950 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-059950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:22:05.042401  370231 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:22:05.044292  370231 out.go:177] 
	W0408 11:22:05.045393  370231 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 11:22:05.046552  370231 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (154.948538ms)

-- stdout --
	* [functional-059950] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0408 11:22:03.978005  370113 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:22:03.978111  370113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:22:03.978115  370113 out.go:304] Setting ErrFile to fd 2...
	I0408 11:22:03.978120  370113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:22:03.978443  370113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:22:03.979074  370113 out.go:298] Setting JSON to false
	I0408 11:22:03.980053  370113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3867,"bootTime":1712571457,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:22:03.980124  370113 start.go:139] virtualization: kvm guest
	I0408 11:22:03.982068  370113 out.go:177] * [functional-059950] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0408 11:22:03.983814  370113 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:22:03.985000  370113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:22:03.983885  370113 notify.go:220] Checking for updates...
	I0408 11:22:03.987246  370113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 11:22:03.988358  370113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 11:22:03.989424  370113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:22:03.990623  370113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:22:03.992308  370113 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:22:03.992737  370113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:22:03.992789  370113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:04.010967  370113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0408 11:22:04.011448  370113 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:04.012077  370113 main.go:141] libmachine: Using API Version  1
	I0408 11:22:04.012102  370113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:04.012457  370113 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:04.012687  370113 main.go:141] libmachine: (functional-059950) Calling .DriverName
	I0408 11:22:04.012947  370113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:22:04.013243  370113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:22:04.013284  370113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:04.031243  370113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0408 11:22:04.031633  370113 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:04.032100  370113 main.go:141] libmachine: Using API Version  1
	I0408 11:22:04.032123  370113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:04.032501  370113 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:04.032652  370113 main.go:141] libmachine: (functional-059950) Calling .DriverName
	I0408 11:22:04.063340  370113 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0408 11:22:04.064703  370113 start.go:297] selected driver: kvm2
	I0408 11:22:04.064714  370113 start.go:901] validating driver "kvm2" against &{Name:functional-059950 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-059950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:22:04.064808  370113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:22:04.066646  370113 out.go:177] 
	W0408 11:22:04.067882  370113 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 11:22:04.069100  370113 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
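The failure mode exercised above (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) is a pre-flight guard: a requested allocation below a usable floor is rejected before any VM work starts. A minimal sketch of that kind of check, assuming an illustrative helper name and using the 1800MB floor reported in the log (this is not minikube's actual code):

```go
package main

import "fmt"

// minUsableMB is the floor reported in the log above.
const minUsableMB = 1800

// validateMemory rejects a requested size (in MB) below the usable minimum,
// mirroring the RSRC_INSUFFICIENT_REQ_MEMORY guard. Name is illustrative.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the --memory 250MB run
	fmt.Println(validateMemory(4000)) // accepted, matching the profile's Memory:4000
}
```

Because the guard runs before driver provisioning, both the English and French dry runs above fail in well under a second.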

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
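The `status -f` argument above is a plain Go text/template evaluated against the status struct. The same template can be exercised locally; the struct shape below is an assumption for illustration (minikube's real type lives in its cmd package):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status mirrors the fields the -f template in the log references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a text/template format string to a Status, which is
// essentially what the `status -f` flag does with its argument.
func renderStatus(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	out, _ := renderStatus("host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st)
	fmt.Println(out) // host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```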

TestFunctional/parallel/ServiceCmdConnect (11.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-059950 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-059950 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-np4qv" [7ac8b030-155a-47ad-a692-6cdd7c654156] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-np4qv" [7ac8b030-155a-47ad-a692-6cdd7c654156] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.008209109s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.54:31438
functional_test.go:1671: http://192.168.39.54:31438: success! body:

Hostname: hello-node-connect-55497b8b78-np4qv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.54:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.54:31438
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.60s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (40.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [587fd07a-46a5-45a5-9605-45bf7c2d77d7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006233315s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-059950 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-059950 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-059950 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-059950 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06e8df0b-add0-417b-8dd4-59e368b240a7] Pending
helpers_test.go:344: "sp-pod" [06e8df0b-add0-417b-8dd4-59e368b240a7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06e8df0b-add0-417b-8dd4-59e368b240a7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0039607s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-059950 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-059950 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-059950 delete -f testdata/storage-provisioner/pod.yaml: (1.590250969s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a3438646-d484-440f-b1df-4b0f5d56d07e] Pending
helpers_test.go:344: "sp-pod" [a3438646-d484-440f-b1df-4b0f5d56d07e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a3438646-d484-440f-b1df-4b0f5d56d07e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003914816s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-059950 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.76s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh -n functional-059950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cp functional-059950:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3113573628/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh -n functional-059950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh -n functional-059950 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (30.63s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-059950 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-49pkp" [2b2db554-5fc5-4310-a676-d492d78d37cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-49pkp" [2b2db554-5fc5-4310-a676-d492d78d37cc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004261753s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;": exit status 1 (298.187108ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;": exit status 1 (307.094805ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;": exit status 1 (225.328376ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;": exit status 1 (226.343329ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059950 exec mysql-859648c796-49pkp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.63s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/362025/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /etc/test/nested/copy/362025/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/362025.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /etc/ssl/certs/362025.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/362025.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /usr/share/ca-certificates/362025.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3620252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /etc/ssl/certs/3620252.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3620252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /usr/share/ca-certificates/3620252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-059950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "sudo systemctl is-active docker": exit status 1 (212.632681ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "sudo systemctl is-active crio": exit status 1 (211.413969ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059950 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-059950
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-059950
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059950 image ls --format short --alsologtostderr:
I0408 11:22:07.315817  370687 out.go:291] Setting OutFile to fd 1 ...
I0408 11:22:07.315983  370687 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.315997  370687 out.go:304] Setting ErrFile to fd 2...
I0408 11:22:07.316004  370687 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.316223  370687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
I0408 11:22:07.317330  370687 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.317551  370687 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.318376  370687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.318607  370687 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.336012  370687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
I0408 11:22:07.336563  370687 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.337147  370687 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.337168  370687 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.337595  370687 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.337804  370687 main.go:141] libmachine: (functional-059950) Calling .GetState
I0408 11:22:07.340092  370687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.340191  370687 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.357861  370687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
I0408 11:22:07.358337  370687 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.358824  370687 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.358855  370687 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.359328  370687 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.359513  370687 main.go:141] libmachine: (functional-059950) Calling .DriverName
I0408 11:22:07.359729  370687 ssh_runner.go:195] Run: systemctl --version
I0408 11:22:07.359760  370687 main.go:141] libmachine: (functional-059950) Calling .GetSSHHostname
I0408 11:22:07.364019  370687 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.364779  370687 main.go:141] libmachine: (functional-059950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:27:b0", ip: ""} in network mk-functional-059950: {Iface:virbr1 ExpiryTime:2024-04-08 12:19:03 +0000 UTC Type:0 Mac:52:54:00:11:27:b0 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-059950 Clientid:01:52:54:00:11:27:b0}
I0408 11:22:07.364868  370687 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined IP address 192.168.39.54 and MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.365194  370687 main.go:141] libmachine: (functional-059950) Calling .GetSSHPort
I0408 11:22:07.365378  370687 main.go:141] libmachine: (functional-059950) Calling .GetSSHKeyPath
I0408 11:22:07.365506  370687 main.go:141] libmachine: (functional-059950) Calling .GetSSHUsername
I0408 11:22:07.365581  370687 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/functional-059950/id_rsa Username:docker}
I0408 11:22:07.461818  370687 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:22:07.565956  370687 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.565971  370687 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.566282  370687 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
I0408 11:22:07.566314  370687 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.566332  370687 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:07.566347  370687 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.566357  370687 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.566626  370687 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.566648  370687 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
I0408 11:22:07.566664  370687 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059950 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-059950  | sha256:72d7ff | 991B   |
| gcr.io/google-containers/addon-resizer      | functional-059950  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059950 image ls --format table --alsologtostderr:
I0408 11:22:07.968476  370827 out.go:291] Setting OutFile to fd 1 ...
I0408 11:22:07.968601  370827 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.968615  370827 out.go:304] Setting ErrFile to fd 2...
I0408 11:22:07.968622  370827 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.968884  370827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
I0408 11:22:07.969581  370827 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.969672  370827 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.970050  370827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.970108  370827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.986054  370827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
I0408 11:22:07.986570  370827 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.987305  370827 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.987334  370827 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.987746  370827 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.987982  370827 main.go:141] libmachine: (functional-059950) Calling .GetState
I0408 11:22:07.989993  370827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.990044  370827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:08.006289  370827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
I0408 11:22:08.006789  370827 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:08.007362  370827 main.go:141] libmachine: Using API Version  1
I0408 11:22:08.007385  370827 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:08.007686  370827 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:08.007888  370827 main.go:141] libmachine: (functional-059950) Calling .DriverName
I0408 11:22:08.008134  370827 ssh_runner.go:195] Run: systemctl --version
I0408 11:22:08.008163  370827 main.go:141] libmachine: (functional-059950) Calling .GetSSHHostname
I0408 11:22:08.011105  370827 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:08.011532  370827 main.go:141] libmachine: (functional-059950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:27:b0", ip: ""} in network mk-functional-059950: {Iface:virbr1 ExpiryTime:2024-04-08 12:19:03 +0000 UTC Type:0 Mac:52:54:00:11:27:b0 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-059950 Clientid:01:52:54:00:11:27:b0}
I0408 11:22:08.011568  370827 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined IP address 192.168.39.54 and MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:08.011678  370827 main.go:141] libmachine: (functional-059950) Calling .GetSSHPort
I0408 11:22:08.011883  370827 main.go:141] libmachine: (functional-059950) Calling .GetSSHKeyPath
I0408 11:22:08.012066  370827 main.go:141] libmachine: (functional-059950) Calling .GetSSHUsername
I0408 11:22:08.012232  370827 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/functional-059950/id_rsa Username:docker}
I0408 11:22:08.146537  370827 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:22:08.240509  370827 main.go:141] libmachine: Making call to close driver server
I0408 11:22:08.240534  370827 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:08.240913  370827 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:08.240938  370827 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:08.240947  370827 main.go:141] libmachine: Making call to close driver server
I0408 11:22:08.240954  370827 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:08.241229  370827 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:08.241250  370827 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:08.241254  370827 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059950 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:72d7ff2d677e100453c23b36a0c0594f0a62745705612768c893edbb48f8aae9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-059950"],"size":"991"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-059950"],"size":"10823156"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059950 image ls --format json --alsologtostderr:
I0408 11:22:07.667814  370741 out.go:291] Setting OutFile to fd 1 ...
I0408 11:22:07.667967  370741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.667980  370741 out.go:304] Setting ErrFile to fd 2...
I0408 11:22:07.667986  370741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.668302  370741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
I0408 11:22:07.669129  370741 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.669276  370741 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.669825  370741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.669899  370741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.686807  370741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43685
I0408 11:22:07.687333  370741 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.687933  370741 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.687950  370741 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.688422  370741 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.688758  370741 main.go:141] libmachine: (functional-059950) Calling .GetState
I0408 11:22:07.690937  370741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.690983  370741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.706358  370741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
I0408 11:22:07.706712  370741 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.707307  370741 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.707347  370741 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.707649  370741 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.707905  370741 main.go:141] libmachine: (functional-059950) Calling .DriverName
I0408 11:22:07.708071  370741 ssh_runner.go:195] Run: systemctl --version
I0408 11:22:07.708092  370741 main.go:141] libmachine: (functional-059950) Calling .GetSSHHostname
I0408 11:22:07.710785  370741 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.711221  370741 main.go:141] libmachine: (functional-059950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:27:b0", ip: ""} in network mk-functional-059950: {Iface:virbr1 ExpiryTime:2024-04-08 12:19:03 +0000 UTC Type:0 Mac:52:54:00:11:27:b0 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-059950 Clientid:01:52:54:00:11:27:b0}
I0408 11:22:07.711243  370741 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined IP address 192.168.39.54 and MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.711519  370741 main.go:141] libmachine: (functional-059950) Calling .GetSSHPort
I0408 11:22:07.711704  370741 main.go:141] libmachine: (functional-059950) Calling .GetSSHKeyPath
I0408 11:22:07.711890  370741 main.go:141] libmachine: (functional-059950) Calling .GetSSHUsername
I0408 11:22:07.712034  370741 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/functional-059950/id_rsa Username:docker}
I0408 11:22:07.812231  370741 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:22:07.900200  370741 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.900216  370741 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.900461  370741 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
I0408 11:22:07.900504  370741 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.900517  370741 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:07.900540  370741 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.900552  370741 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.900811  370741 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.900868  370741 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:07.900887  370741 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059950 image ls --format yaml --alsologtostderr:
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-059950
size: "10823156"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:72d7ff2d677e100453c23b36a0c0594f0a62745705612768c893edbb48f8aae9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-059950
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "70534964"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059950 image ls --format yaml --alsologtostderr:
I0408 11:22:07.316077  370688 out.go:291] Setting OutFile to fd 1 ...
I0408 11:22:07.316242  370688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.316250  370688 out.go:304] Setting ErrFile to fd 2...
I0408 11:22:07.316258  370688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.316528  370688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
I0408 11:22:07.317664  370688 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.317838  370688 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.319376  370688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.319456  370688 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.340300  370688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
I0408 11:22:07.340789  370688 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.341439  370688 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.341460  370688 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.341837  370688 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.342057  370688 main.go:141] libmachine: (functional-059950) Calling .GetState
I0408 11:22:07.344124  370688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.344171  370688 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.358817  370688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35371
I0408 11:22:07.359305  370688 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.359771  370688 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.359793  370688 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.360236  370688 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.360446  370688 main.go:141] libmachine: (functional-059950) Calling .DriverName
I0408 11:22:07.360670  370688 ssh_runner.go:195] Run: systemctl --version
I0408 11:22:07.360697  370688 main.go:141] libmachine: (functional-059950) Calling .GetSSHHostname
I0408 11:22:07.363807  370688 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.364411  370688 main.go:141] libmachine: (functional-059950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:27:b0", ip: ""} in network mk-functional-059950: {Iface:virbr1 ExpiryTime:2024-04-08 12:19:03 +0000 UTC Type:0 Mac:52:54:00:11:27:b0 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-059950 Clientid:01:52:54:00:11:27:b0}
I0408 11:22:07.364440  370688 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined IP address 192.168.39.54 and MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.364555  370688 main.go:141] libmachine: (functional-059950) Calling .GetSSHPort
I0408 11:22:07.364739  370688 main.go:141] libmachine: (functional-059950) Calling .GetSSHKeyPath
I0408 11:22:07.364877  370688 main.go:141] libmachine: (functional-059950) Calling .GetSSHUsername
I0408 11:22:07.365041  370688 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/functional-059950/id_rsa Username:docker}
I0408 11:22:07.471323  370688 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:22:07.585118  370688 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.585135  370688 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.585479  370688 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.585518  370688 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:07.585536  370688 main.go:141] libmachine: Making call to close driver server
I0408 11:22:07.585549  370688 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:07.587307  370688 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
I0408 11:22:07.587344  370688 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:07.587362  370688 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh pgrep buildkitd: exit status 1 (221.160648ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image build -t localhost/my-image:functional-059950 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image build -t localhost/my-image:functional-059950 testdata/build --alsologtostderr: (4.090209166s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059950 image build -t localhost/my-image:functional-059950 testdata/build --alsologtostderr:
I0408 11:22:07.867248  370784 out.go:291] Setting OutFile to fd 1 ...
I0408 11:22:07.867386  370784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.867397  370784 out.go:304] Setting ErrFile to fd 2...
I0408 11:22:07.867403  370784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:22:07.867675  370784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
I0408 11:22:07.868531  370784 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.869304  370784 config.go:182] Loaded profile config "functional-059950": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 11:22:07.869863  370784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.869958  370784 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.886956  370784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
I0408 11:22:07.887522  370784 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.888208  370784 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.888238  370784 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.888583  370784 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.888929  370784 main.go:141] libmachine: (functional-059950) Calling .GetState
I0408 11:22:07.890734  370784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 11:22:07.890787  370784 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:22:07.906870  370784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33739
I0408 11:22:07.907256  370784 main.go:141] libmachine: () Calling .GetVersion
I0408 11:22:07.907733  370784 main.go:141] libmachine: Using API Version  1
I0408 11:22:07.907761  370784 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:22:07.908139  370784 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:22:07.908317  370784 main.go:141] libmachine: (functional-059950) Calling .DriverName
I0408 11:22:07.908539  370784 ssh_runner.go:195] Run: systemctl --version
I0408 11:22:07.908567  370784 main.go:141] libmachine: (functional-059950) Calling .GetSSHHostname
I0408 11:22:07.911569  370784 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.912022  370784 main.go:141] libmachine: (functional-059950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:27:b0", ip: ""} in network mk-functional-059950: {Iface:virbr1 ExpiryTime:2024-04-08 12:19:03 +0000 UTC Type:0 Mac:52:54:00:11:27:b0 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-059950 Clientid:01:52:54:00:11:27:b0}
I0408 11:22:07.912059  370784 main.go:141] libmachine: (functional-059950) DBG | domain functional-059950 has defined IP address 192.168.39.54 and MAC address 52:54:00:11:27:b0 in network mk-functional-059950
I0408 11:22:07.912255  370784 main.go:141] libmachine: (functional-059950) Calling .GetSSHPort
I0408 11:22:07.912950  370784 main.go:141] libmachine: (functional-059950) Calling .GetSSHKeyPath
I0408 11:22:07.913164  370784 main.go:141] libmachine: (functional-059950) Calling .GetSSHUsername
I0408 11:22:07.913365  370784 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/functional-059950/id_rsa Username:docker}
I0408 11:22:08.036833  370784 build_images.go:161] Building image from path: /tmp/build.2912737203.tar
I0408 11:22:08.036907  370784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0408 11:22:08.073824  370784 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2912737203.tar
I0408 11:22:08.080209  370784 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2912737203.tar: stat -c "%s %y" /var/lib/minikube/build/build.2912737203.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2912737203.tar': No such file or directory
I0408 11:22:08.080243  370784 ssh_runner.go:362] scp /tmp/build.2912737203.tar --> /var/lib/minikube/build/build.2912737203.tar (3072 bytes)
I0408 11:22:08.118529  370784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2912737203
I0408 11:22:08.130338  370784 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2912737203 -xf /var/lib/minikube/build/build.2912737203.tar
I0408 11:22:08.147119  370784 containerd.go:394] Building image: /var/lib/minikube/build/build.2912737203
I0408 11:22:08.147174  370784 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2912737203 --local dockerfile=/var/lib/minikube/build/build.2912737203 --output type=image,name=localhost/my-image:functional-059950
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:67474beb4eb5f99bd4a260ab272f07874cf3b74f6fe218c261a7b1d62944c4bf
#8 exporting manifest sha256:67474beb4eb5f99bd4a260ab272f07874cf3b74f6fe218c261a7b1d62944c4bf 0.0s done
#8 exporting config sha256:314357e665edc15b863e06cf48b35e7b8082ee002fdd84898784a587f4f77d5b 0.0s done
#8 naming to localhost/my-image:functional-059950 done
#8 DONE 0.3s
I0408 11:22:11.853369  370784 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2912737203 --local dockerfile=/var/lib/minikube/build/build.2912737203 --output type=image,name=localhost/my-image:functional-059950: (3.706158708s)
I0408 11:22:11.853473  370784 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2912737203
I0408 11:22:11.868038  370784 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2912737203.tar
I0408 11:22:11.879699  370784 build_images.go:217] Built localhost/my-image:functional-059950 from /tmp/build.2912737203.tar
I0408 11:22:11.879748  370784 build_images.go:133] succeeded building to: functional-059950
I0408 11:22:11.879754  370784 build_images.go:134] failed building to: 
I0408 11:22:11.879784  370784 main.go:141] libmachine: Making call to close driver server
I0408 11:22:11.879802  370784 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:11.880125  370784 main.go:141] libmachine: (functional-059950) DBG | Closing plugin on server side
I0408 11:22:11.880143  370784 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:11.880159  370784 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:22:11.880175  370784 main.go:141] libmachine: Making call to close driver server
I0408 11:22:11.880187  370784 main.go:141] libmachine: (functional-059950) Calling .Close
I0408 11:22:11.880435  370784 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:22:11.880450  370784 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
2024/04/08 11:22:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.55s)

TestFunctional/parallel/ImageCommands/Setup (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.407759845s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-059950
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

TestFunctional/parallel/MountCmd/any-port (19.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdany-port848837738/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712575292521510651" to /tmp/TestFunctionalparallelMountCmdany-port848837738/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712575292521510651" to /tmp/TestFunctionalparallelMountCmdany-port848837738/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712575292521510651" to /tmp/TestFunctionalparallelMountCmdany-port848837738/001/test-1712575292521510651
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.871865ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  8 11:21 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  8 11:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  8 11:21 test-1712575292521510651
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh cat /mount-9p/test-1712575292521510651
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-059950 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3d0a6572-558c-45d5-9d8e-acaa50554858] Pending
helpers_test.go:344: "busybox-mount" [3d0a6572-558c-45d5-9d8e-acaa50554858] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3d0a6572-558c-45d5-9d8e-acaa50554858] Running
helpers_test.go:344: "busybox-mount" [3d0a6572-558c-45d5-9d8e-acaa50554858] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3d0a6572-558c-45d5-9d8e-acaa50554858] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004513472s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-059950 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdany-port848837738/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr: (4.978921734s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr: (2.771851362s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.979403852s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-059950
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image load --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr: (4.320848933s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image save gcr.io/google-containers/addon-resizer:functional-059950 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image save gcr.io/google-containers/addon-resizer:functional-059950 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.317830853s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image rm gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.109438217s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdspecific-port133111061/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.939953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdspecific-port133111061/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "sudo umount -f /mount-9p": exit status 1 (359.034419ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-059950 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdspecific-port133111061/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)
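The forced `umount -f /mount-9p` above exits with status 32 (`not mounted`), which `ssh` propagates back to the test as a non-zero exit. A minimal local sketch of capturing such a propagated exit status — no minikube involved; the message text is simply copied from the log for illustration:

```shell
# Simulate a command failing the way "sudo umount -f /mount-9p" did above:
# it prints a "not mounted" diagnostic to stderr and exits with status 32.
sh -c 'echo "umount: /mount-9p: not mounted." >&2; exit 32'
status=$?
echo "captured exit status: $status"
```

The test treats this as benign: the mount was already torn down, so "not mounted" on forced unmount is the expected cleanup-path outcome.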

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-059950
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 image save --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-059950 image save --daemon gcr.io/google-containers/addon-resizer:functional-059950 --alsologtostderr: (1.89457158s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-059950
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T" /mount1: exit status 1 (424.73043ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-059950 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3210955460/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
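VerifyCleanup starts three background mount daemons, then `mount -p functional-059950 --kill=true` terminates them all, which is why the subsequent stop steps report "unable to find parent, assuming dead". The same start-then-kill pattern with plain shell background jobs — `sleep` stands in for the mount helpers; entirely illustrative:

```shell
# Launch two stand-in background helpers (the real test launches three mounts).
sleep 30 & pid1=$!
sleep 30 & pid2=$!

# Kill them in one sweep, as "mount --kill=true" does, then reap them.
kill "$pid1" "$pid2"
wait "$pid1" 2>/dev/null
wait "$pid2" 2>/dev/null

# kill -0 probes for existence; failure means the helper is gone.
kill -0 "$pid1" 2>/dev/null || echo "helper 1 gone"
kill -0 "$pid2" 2>/dev/null || echo "helper 2 gone"
```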

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-059950 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-059950 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kh9ht" [f76e93bd-ea13-4605-b382-340cbcf8b3f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kh9ht" [f76e93bd-ea13-4605-b382-340cbcf8b3f6] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005677446s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "236.083899ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "58.832719ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "242.746973ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "67.737337ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service list -o json
functional_test.go:1490: Took "460.301092ms" to run "out/minikube-linux-amd64 -p functional-059950 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.54:31516
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-059950 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.54:31516
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-059950
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-059950
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-059950
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (274.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-870569 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 11:22:46.175008  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:25:02.327076  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:25:30.015954  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:26:30.441609  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.446937  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.457382  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.477729  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.518027  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.598403  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:30.758828  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:31.079473  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:31.720098  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:33.000755  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:35.561626  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:40.682433  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:26:50.923637  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-870569 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m33.584826128s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (274.29s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.1s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-870569 -- rollout status deployment/busybox: (3.651187844s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-bjrg9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-jbbnr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-r4s9k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-bjrg9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-jbbnr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-r4s9k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-bjrg9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-jbbnr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-r4s9k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.10s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-bjrg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-bjrg9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-jbbnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-jbbnr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-r4s9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-870569 -- exec busybox-7fdf7869d9-r4s9k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)
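The host-IP lookup above is a text pipeline: `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes line 5 of nslookup's output and extracts the third space-separated field, which is then pinged. A sketch of the same extraction against canned busybox-style output — the sample text below is illustrative, not captured from this run:

```shell
# Canned nslookup output; in busybox's format, line 5 carries the resolved
# address ("Address 1: <ip> <name>").
nslookup_output='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Same extraction the test runs inside each busybox pod: line 5, third field.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address (here `192.168.39.1`, the KVM host gateway seen in the log) is what the follow-up `ping -c 1` targets.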

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-870569 -v=7 --alsologtostderr
E0408 11:27:11.403860  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-870569 -v=7 --alsologtostderr: (45.23704879s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-870569 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.69s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp testdata/cp-test.txt ha-870569:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3311662718/001/cp-test_ha-870569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569:/home/docker/cp-test.txt ha-870569-m02:/home/docker/cp-test_ha-870569_ha-870569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test_ha-870569_ha-870569-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569:/home/docker/cp-test.txt ha-870569-m03:/home/docker/cp-test_ha-870569_ha-870569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test_ha-870569_ha-870569-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569:/home/docker/cp-test.txt ha-870569-m04:/home/docker/cp-test_ha-870569_ha-870569-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test_ha-870569_ha-870569-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp testdata/cp-test.txt ha-870569-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3311662718/001/cp-test_ha-870569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test.txt"
E0408 11:27:52.364340  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m02:/home/docker/cp-test.txt ha-870569:/home/docker/cp-test_ha-870569-m02_ha-870569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test_ha-870569-m02_ha-870569.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m02:/home/docker/cp-test.txt ha-870569-m03:/home/docker/cp-test_ha-870569-m02_ha-870569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test_ha-870569-m02_ha-870569-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m02:/home/docker/cp-test.txt ha-870569-m04:/home/docker/cp-test_ha-870569-m02_ha-870569-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test_ha-870569-m02_ha-870569-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp testdata/cp-test.txt ha-870569-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3311662718/001/cp-test_ha-870569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m03:/home/docker/cp-test.txt ha-870569:/home/docker/cp-test_ha-870569-m03_ha-870569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test_ha-870569-m03_ha-870569.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m03:/home/docker/cp-test.txt ha-870569-m02:/home/docker/cp-test_ha-870569-m03_ha-870569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test_ha-870569-m03_ha-870569-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m03:/home/docker/cp-test.txt ha-870569-m04:/home/docker/cp-test_ha-870569-m03_ha-870569-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test_ha-870569-m03_ha-870569-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp testdata/cp-test.txt ha-870569-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3311662718/001/cp-test_ha-870569-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m04:/home/docker/cp-test.txt ha-870569:/home/docker/cp-test_ha-870569-m04_ha-870569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569 "sudo cat /home/docker/cp-test_ha-870569-m04_ha-870569.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m04:/home/docker/cp-test.txt ha-870569-m02:/home/docker/cp-test_ha-870569-m04_ha-870569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m02 "sudo cat /home/docker/cp-test_ha-870569-m04_ha-870569-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 cp ha-870569-m04:/home/docker/cp-test.txt ha-870569-m03:/home/docker/cp-test_ha-870569-m04_ha-870569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 ssh -n ha-870569-m03 "sudo cat /home/docker/cp-test_ha-870569-m04_ha-870569-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.69s)
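Each `cp` / `ssh -n ... "sudo cat ..."` pair above is a round-trip check: copy a file to a node (or between nodes), read it back over SSH, and compare against the source. A local sketch of the same pattern using temp files — the paths and file content here are stand-ins, not the test's actual testdata:

```shell
# Stand-ins for testdata/cp-test.txt and a per-node destination path.
src=$(mktemp)
dst=$(mktemp)
echo "cp-test contents" > "$src"

# Copy out, read back, compare -- the shape of every CopyFile step above.
cp "$src" "$dst"
if diff -q "$src" "$dst" > /dev/null; then roundtrip=ok; else roundtrip=fail; fi
echo "round-trip: $roundtrip"

rm -f "$src" "$dst"
```

The real test repeats this for every ordered pair of the four nodes (`ha-870569` through `ha-870569-m04`), which is why the section is almost entirely permutations of the same two commands.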

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (93.14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 node stop m02 -v=7 --alsologtostderr
E0408 11:29:14.285476  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-870569 node stop m02 -v=7 --alsologtostderr: (1m32.453950719s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr: exit status 7 (687.682933ms)

                                                
                                                
-- stdout --
	ha-870569
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-870569-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-870569-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-870569-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:29:33.650952  375179 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:29:33.651075  375179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:29:33.651086  375179 out.go:304] Setting ErrFile to fd 2...
	I0408 11:29:33.651091  375179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:29:33.651345  375179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:29:33.651552  375179 out.go:298] Setting JSON to false
	I0408 11:29:33.651581  375179 mustload.go:65] Loading cluster: ha-870569
	I0408 11:29:33.651702  375179 notify.go:220] Checking for updates...
	I0408 11:29:33.652025  375179 config.go:182] Loaded profile config "ha-870569": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:29:33.652048  375179 status.go:255] checking status of ha-870569 ...
	I0408 11:29:33.652475  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.652563  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.673611  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0408 11:29:33.674203  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.675035  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.675079  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.675492  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.675696  375179 main.go:141] libmachine: (ha-870569) Calling .GetState
	I0408 11:29:33.677509  375179 status.go:330] ha-870569 host status = "Running" (err=<nil>)
	I0408 11:29:33.677526  375179 host.go:66] Checking if "ha-870569" exists ...
	I0408 11:29:33.677820  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.677861  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.693173  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0408 11:29:33.693535  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.693987  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.694009  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.694318  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.694505  375179 main.go:141] libmachine: (ha-870569) Calling .GetIP
	I0408 11:29:33.697001  375179 main.go:141] libmachine: (ha-870569) DBG | domain ha-870569 has defined MAC address 52:54:00:4e:5b:ba in network mk-ha-870569
	I0408 11:29:33.697442  375179 main.go:141] libmachine: (ha-870569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5b:ba", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:22:34 +0000 UTC Type:0 Mac:52:54:00:4e:5b:ba Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-870569 Clientid:01:52:54:00:4e:5b:ba}
	I0408 11:29:33.697480  375179 main.go:141] libmachine: (ha-870569) DBG | domain ha-870569 has defined IP address 192.168.39.77 and MAC address 52:54:00:4e:5b:ba in network mk-ha-870569
	I0408 11:29:33.697551  375179 host.go:66] Checking if "ha-870569" exists ...
	I0408 11:29:33.697849  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.697885  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.711896  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0408 11:29:33.712371  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.712873  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.712900  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.713181  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.713358  375179 main.go:141] libmachine: (ha-870569) Calling .DriverName
	I0408 11:29:33.713538  375179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:29:33.713560  375179 main.go:141] libmachine: (ha-870569) Calling .GetSSHHostname
	I0408 11:29:33.716564  375179 main.go:141] libmachine: (ha-870569) DBG | domain ha-870569 has defined MAC address 52:54:00:4e:5b:ba in network mk-ha-870569
	I0408 11:29:33.716937  375179 main.go:141] libmachine: (ha-870569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5b:ba", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:22:34 +0000 UTC Type:0 Mac:52:54:00:4e:5b:ba Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-870569 Clientid:01:52:54:00:4e:5b:ba}
	I0408 11:29:33.716970  375179 main.go:141] libmachine: (ha-870569) DBG | domain ha-870569 has defined IP address 192.168.39.77 and MAC address 52:54:00:4e:5b:ba in network mk-ha-870569
	I0408 11:29:33.717181  375179 main.go:141] libmachine: (ha-870569) Calling .GetSSHPort
	I0408 11:29:33.717354  375179 main.go:141] libmachine: (ha-870569) Calling .GetSSHKeyPath
	I0408 11:29:33.717522  375179 main.go:141] libmachine: (ha-870569) Calling .GetSSHUsername
	I0408 11:29:33.717689  375179 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/ha-870569/id_rsa Username:docker}
	I0408 11:29:33.806312  375179 ssh_runner.go:195] Run: systemctl --version
	I0408 11:29:33.813811  375179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:29:33.831088  375179 kubeconfig.go:125] found "ha-870569" server: "https://192.168.39.254:8443"
	I0408 11:29:33.831122  375179 api_server.go:166] Checking apiserver status ...
	I0408 11:29:33.831157  375179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:29:33.848153  375179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0408 11:29:33.859321  375179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:29:33.859361  375179 ssh_runner.go:195] Run: ls
	I0408 11:29:33.864303  375179 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:29:33.870397  375179 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:29:33.870422  375179 status.go:422] ha-870569 apiserver status = Running (err=<nil>)
	I0408 11:29:33.870435  375179 status.go:257] ha-870569 status: &{Name:ha-870569 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:29:33.870456  375179 status.go:255] checking status of ha-870569-m02 ...
	I0408 11:29:33.870830  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.870885  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.885770  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0408 11:29:33.886216  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.886707  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.886730  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.887035  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.887230  375179 main.go:141] libmachine: (ha-870569-m02) Calling .GetState
	I0408 11:29:33.888882  375179 status.go:330] ha-870569-m02 host status = "Stopped" (err=<nil>)
	I0408 11:29:33.888896  375179 status.go:343] host is not running, skipping remaining checks
	I0408 11:29:33.888902  375179 status.go:257] ha-870569-m02 status: &{Name:ha-870569-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:29:33.888917  375179 status.go:255] checking status of ha-870569-m03 ...
	I0408 11:29:33.889336  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.889393  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.903560  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0408 11:29:33.903938  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.904409  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.904433  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.904757  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.904919  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetState
	I0408 11:29:33.906427  375179 status.go:330] ha-870569-m03 host status = "Running" (err=<nil>)
	I0408 11:29:33.906443  375179 host.go:66] Checking if "ha-870569-m03" exists ...
	I0408 11:29:33.906728  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.906772  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.922476  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0408 11:29:33.922940  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.923411  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.923427  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.923747  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.923921  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetIP
	I0408 11:29:33.926392  375179 main.go:141] libmachine: (ha-870569-m03) DBG | domain ha-870569-m03 has defined MAC address 52:54:00:29:cd:1e in network mk-ha-870569
	I0408 11:29:33.926763  375179 main.go:141] libmachine: (ha-870569-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:cd:1e", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:25:53 +0000 UTC Type:0 Mac:52:54:00:29:cd:1e Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-870569-m03 Clientid:01:52:54:00:29:cd:1e}
	I0408 11:29:33.926785  375179 main.go:141] libmachine: (ha-870569-m03) DBG | domain ha-870569-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:29:cd:1e in network mk-ha-870569
	I0408 11:29:33.926901  375179 host.go:66] Checking if "ha-870569-m03" exists ...
	I0408 11:29:33.927291  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:33.927340  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:33.941563  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0408 11:29:33.941932  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:33.942341  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:33.942358  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:33.942630  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:33.942772  375179 main.go:141] libmachine: (ha-870569-m03) Calling .DriverName
	I0408 11:29:33.942972  375179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:29:33.942993  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetSSHHostname
	I0408 11:29:33.945547  375179 main.go:141] libmachine: (ha-870569-m03) DBG | domain ha-870569-m03 has defined MAC address 52:54:00:29:cd:1e in network mk-ha-870569
	I0408 11:29:33.945992  375179 main.go:141] libmachine: (ha-870569-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:cd:1e", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:25:53 +0000 UTC Type:0 Mac:52:54:00:29:cd:1e Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-870569-m03 Clientid:01:52:54:00:29:cd:1e}
	I0408 11:29:33.946022  375179 main.go:141] libmachine: (ha-870569-m03) DBG | domain ha-870569-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:29:cd:1e in network mk-ha-870569
	I0408 11:29:33.946302  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetSSHPort
	I0408 11:29:33.946488  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetSSHKeyPath
	I0408 11:29:33.946637  375179 main.go:141] libmachine: (ha-870569-m03) Calling .GetSSHUsername
	I0408 11:29:33.946791  375179 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/ha-870569-m03/id_rsa Username:docker}
	I0408 11:29:34.041063  375179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:29:34.069277  375179 kubeconfig.go:125] found "ha-870569" server: "https://192.168.39.254:8443"
	I0408 11:29:34.069311  375179 api_server.go:166] Checking apiserver status ...
	I0408 11:29:34.069358  375179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:29:34.086084  375179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	W0408 11:29:34.097219  375179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:29:34.097286  375179 ssh_runner.go:195] Run: ls
	I0408 11:29:34.102711  375179 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:29:34.107830  375179 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:29:34.107855  375179 status.go:422] ha-870569-m03 apiserver status = Running (err=<nil>)
	I0408 11:29:34.107868  375179 status.go:257] ha-870569-m03 status: &{Name:ha-870569-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:29:34.107896  375179 status.go:255] checking status of ha-870569-m04 ...
	I0408 11:29:34.108218  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:34.108257  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:34.123776  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0408 11:29:34.124183  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:34.124649  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:34.124683  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:34.125060  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:34.125268  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetState
	I0408 11:29:34.126684  375179 status.go:330] ha-870569-m04 host status = "Running" (err=<nil>)
	I0408 11:29:34.126700  375179 host.go:66] Checking if "ha-870569-m04" exists ...
	I0408 11:29:34.127097  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:34.127142  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:34.141497  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0408 11:29:34.141884  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:34.142292  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:34.142320  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:34.142651  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:34.142878  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetIP
	I0408 11:29:34.145504  375179 main.go:141] libmachine: (ha-870569-m04) DBG | domain ha-870569-m04 has defined MAC address 52:54:00:89:c2:ce in network mk-ha-870569
	I0408 11:29:34.146013  375179 main.go:141] libmachine: (ha-870569-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:c2:ce", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:27:17 +0000 UTC Type:0 Mac:52:54:00:89:c2:ce Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-870569-m04 Clientid:01:52:54:00:89:c2:ce}
	I0408 11:29:34.146043  375179 main.go:141] libmachine: (ha-870569-m04) DBG | domain ha-870569-m04 has defined IP address 192.168.39.233 and MAC address 52:54:00:89:c2:ce in network mk-ha-870569
	I0408 11:29:34.146157  375179 host.go:66] Checking if "ha-870569-m04" exists ...
	I0408 11:29:34.146475  375179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:29:34.146516  375179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:29:34.162445  375179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0408 11:29:34.162889  375179 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:29:34.163446  375179 main.go:141] libmachine: Using API Version  1
	I0408 11:29:34.163471  375179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:29:34.163770  375179 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:29:34.163965  375179 main.go:141] libmachine: (ha-870569-m04) Calling .DriverName
	I0408 11:29:34.164165  375179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:29:34.164189  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetSSHHostname
	I0408 11:29:34.167222  375179 main.go:141] libmachine: (ha-870569-m04) DBG | domain ha-870569-m04 has defined MAC address 52:54:00:89:c2:ce in network mk-ha-870569
	I0408 11:29:34.167682  375179 main.go:141] libmachine: (ha-870569-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:c2:ce", ip: ""} in network mk-ha-870569: {Iface:virbr1 ExpiryTime:2024-04-08 12:27:17 +0000 UTC Type:0 Mac:52:54:00:89:c2:ce Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-870569-m04 Clientid:01:52:54:00:89:c2:ce}
	I0408 11:29:34.167711  375179 main.go:141] libmachine: (ha-870569-m04) DBG | domain ha-870569-m04 has defined IP address 192.168.39.233 and MAC address 52:54:00:89:c2:ce in network mk-ha-870569
	I0408 11:29:34.167845  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetSSHPort
	I0408 11:29:34.168037  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetSSHKeyPath
	I0408 11:29:34.168198  375179 main.go:141] libmachine: (ha-870569-m04) Calling .GetSSHUsername
	I0408 11:29:34.168339  375179 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/ha-870569-m04/id_rsa Username:docker}
	I0408 11:29:34.256885  375179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:29:34.276949  375179 status.go:257] ha-870569-m04 status: &{Name:ha-870569-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.14s)
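The `status` probe visible in the stderr trace above checks root-disk pressure on each node by running `df` piped to `awk` over SSH. Run standalone, the same one-liner prints the use percentage of the filesystem holding `/var`:

```shell
# Disk-usage probe from the minikube status trace: print column 5 (Use%)
# of the second line of `df -h /var`, i.e. the data row for that mount.
df -h /var | awk 'NR==2{print $5}'
```

Typical output is a value such as `17%`; in the log, minikube executes this via `ssh_runner` on every node it inspects.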

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (42.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 node start m02 -v=7 --alsologtostderr
E0408 11:30:02.326172  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-870569 node start m02 -v=7 --alsologtostderr: (41.231898403s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-870569 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-870569 -v=7 --alsologtostderr
E0408 11:31:30.440663  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 11:31:58.126379  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-870569 -v=7 --alsologtostderr: (4m38.794486855s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-870569 --wait=true -v=7 --alsologtostderr
E0408 11:35:02.327144  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:36:25.376517  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:36:30.441407  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-870569 --wait=true -v=7 --alsologtostderr: (2m37.880953683s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-870569
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.80s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-870569 node delete m03 -v=7 --alsologtostderr: (7.301677745s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.07s)
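The go-template in the `kubectl get nodes` check above iterates each node's `status.conditions` and prints the status of the `Ready` condition. A minimal stand-in for the same extraction, using hypothetical canned `kubectl get nodes` tabular output (column 2 is STATUS) instead of a live cluster:

```shell
# Sketch of the readiness filter: print the STATUS column for each node.
# The two input lines below are a made-up sample, not taken from this run.
printf 'ha-870569       Ready   control-plane   10m   v1.29.3\nha-870569-m04   Ready   <none>          5m    v1.29.3\n' \
  | awk '{print $2}'
# prints "Ready" once per sample node
```

The test itself uses `-o go-template` so the check keys on the `Ready` condition type rather than on column position, which is more robust against kubectl output changes.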

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (276.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 stop -v=7 --alsologtostderr
E0408 11:40:02.326763  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:41:30.441173  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-870569 stop -v=7 --alsologtostderr: (4m36.465003282s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr: exit status 7 (122.26261ms)

                                                
                                                
-- stdout --
	ha-870569
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-870569-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-870569-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:19.268636  378187 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:19.268831  378187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:19.268840  378187 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:19.268844  378187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:19.269053  378187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:42:19.269248  378187 out.go:298] Setting JSON to false
	I0408 11:42:19.269273  378187 mustload.go:65] Loading cluster: ha-870569
	I0408 11:42:19.269563  378187 notify.go:220] Checking for updates...
	I0408 11:42:19.270462  378187 config.go:182] Loaded profile config "ha-870569": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:42:19.270517  378187 status.go:255] checking status of ha-870569 ...
	I0408 11:42:19.271480  378187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:42:19.271534  378187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:19.294778  378187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0408 11:42:19.295230  378187 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:19.295861  378187 main.go:141] libmachine: Using API Version  1
	I0408 11:42:19.295883  378187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:19.296277  378187 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:19.296507  378187 main.go:141] libmachine: (ha-870569) Calling .GetState
	I0408 11:42:19.298012  378187 status.go:330] ha-870569 host status = "Stopped" (err=<nil>)
	I0408 11:42:19.298028  378187 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:19.298033  378187 status.go:257] ha-870569 status: &{Name:ha-870569 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:19.298051  378187 status.go:255] checking status of ha-870569-m02 ...
	I0408 11:42:19.298308  378187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:42:19.298339  378187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:19.312451  378187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0408 11:42:19.312873  378187 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:19.313318  378187 main.go:141] libmachine: Using API Version  1
	I0408 11:42:19.313342  378187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:19.313676  378187 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:19.313854  378187 main.go:141] libmachine: (ha-870569-m02) Calling .GetState
	I0408 11:42:19.315132  378187 status.go:330] ha-870569-m02 host status = "Stopped" (err=<nil>)
	I0408 11:42:19.315147  378187 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:19.315154  378187 status.go:257] ha-870569-m02 status: &{Name:ha-870569-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:19.315186  378187 status.go:255] checking status of ha-870569-m04 ...
	I0408 11:42:19.315453  378187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:42:19.315492  378187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:19.330285  378187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0408 11:42:19.330638  378187 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:19.331181  378187 main.go:141] libmachine: Using API Version  1
	I0408 11:42:19.331221  378187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:19.331523  378187 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:19.331725  378187 main.go:141] libmachine: (ha-870569-m04) Calling .GetState
	I0408 11:42:19.333238  378187 status.go:330] ha-870569-m04 host status = "Stopped" (err=<nil>)
	I0408 11:42:19.333251  378187 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:19.333257  378187 status.go:257] ha-870569-m04 status: &{Name:ha-870569-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.59s)

TestMultiControlPlane/serial/RestartCluster (137.92s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-870569 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 11:42:53.486973  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-870569 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m17.152538724s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (137.92s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

TestMultiControlPlane/serial/AddSecondaryNode (172.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-870569 --control-plane -v=7 --alsologtostderr
E0408 11:45:02.326710  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:46:30.440890  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-870569 --control-plane -v=7 --alsologtostderr: (2m51.99536127s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-870569 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (172.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (60.25s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-870282 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-870282 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m0.253073225s)
--- PASS: TestJSONOutput/start/Command (60.25s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-870282 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-870282 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-870282 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-870282 --output=json --user=testUser: (7.33998604s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-510543 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-510543 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.227711ms)

-- stdout --
	{"specversion":"1.0","id":"05210346-0842-4045-9f2d-77e789b71277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-510543] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66c3b0c2-5885-42ca-a06f-4cad18c49f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18588"}}
	{"specversion":"1.0","id":"711c17f8-13cd-4da4-a347-d8a8fa85b168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d6d0cae0-2109-4bd1-b630-d9f3422db11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig"}}
	{"specversion":"1.0","id":"217ffc79-f13f-4852-8d5e-cce0a9a597f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube"}}
	{"specversion":"1.0","id":"c5f8cab6-bb66-4ae1-b412-ca930da404dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b6e9b7cf-9fad-45bb-9c96-cd309288ced1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e62e4b68-2486-495b-b3f0-ac4152257c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-510543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-510543
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (94.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-859990 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-859990 --driver=kvm2  --container-runtime=containerd: (46.264395302s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-863144 --driver=kvm2  --container-runtime=containerd
E0408 11:50:02.326998  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-863144 --driver=kvm2  --container-runtime=containerd: (45.448436406s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-859990
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-863144
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-863144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-863144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-863144: (1.018154785s)
helpers_test.go:175: Cleaning up "first-859990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-859990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-859990: (1.008211165s)
--- PASS: TestMinikubeProfile (94.66s)

TestMountStart/serial/StartWithMountFirst (28.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-990042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-990042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.70377105s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.70s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-990042 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-990042 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (33.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008601 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008601 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (32.640856336s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.64s)

TestMountStart/serial/VerifyMountSecond (0.46s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.46s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-990042 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-008601
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-008601: (1.393870339s)
--- PASS: TestMountStart/serial/Stop (1.39s)

TestMountStart/serial/RestartStopped (23.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-008601
E0408 11:51:30.441477  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-008601: (22.63320927s)
--- PASS: TestMountStart/serial/RestartStopped (23.63s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-008601 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (101.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026136 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 11:53:05.377676  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026136 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m41.334445532s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.78s)

TestMultiNode/serial/DeployApp2Nodes (5.66s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-026136 -- rollout status deployment/busybox: (4.019679295s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-cltb6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-ffxpg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-cltb6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-ffxpg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-cltb6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-ffxpg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.66s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-cltb6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-cltb6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-ffxpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-026136 -- exec busybox-7fdf7869d9-ffxpg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (43.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026136 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-026136 -v 3 --alsologtostderr: (42.757922396s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.35s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-026136 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp testdata/cp-test.txt multinode-026136:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2053930168/001/cp-test_multinode-026136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136:/home/docker/cp-test.txt multinode-026136-m02:/home/docker/cp-test_multinode-026136_multinode-026136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test_multinode-026136_multinode-026136-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136:/home/docker/cp-test.txt multinode-026136-m03:/home/docker/cp-test_multinode-026136_multinode-026136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test_multinode-026136_multinode-026136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp testdata/cp-test.txt multinode-026136-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2053930168/001/cp-test_multinode-026136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m02:/home/docker/cp-test.txt multinode-026136:/home/docker/cp-test_multinode-026136-m02_multinode-026136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test_multinode-026136-m02_multinode-026136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m02:/home/docker/cp-test.txt multinode-026136-m03:/home/docker/cp-test_multinode-026136-m02_multinode-026136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test_multinode-026136-m02_multinode-026136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp testdata/cp-test.txt multinode-026136-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2053930168/001/cp-test_multinode-026136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m03:/home/docker/cp-test.txt multinode-026136:/home/docker/cp-test_multinode-026136-m03_multinode-026136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136 "sudo cat /home/docker/cp-test_multinode-026136-m03_multinode-026136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 cp multinode-026136-m03:/home/docker/cp-test.txt multinode-026136-m02:/home/docker/cp-test_multinode-026136-m03_multinode-026136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 ssh -n multinode-026136-m02 "sudo cat /home/docker/cp-test_multinode-026136-m03_multinode-026136-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.81s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-026136 node stop m03: (1.528055843s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026136 status: exit status 7 (450.455656ms)

-- stdout --
	multinode-026136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr: exit status 7 (444.929872ms)

-- stdout --
	multinode-026136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-026136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-026136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 11:54:31.298516  385291 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:54:31.298651  385291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:54:31.298664  385291 out.go:304] Setting ErrFile to fd 2...
	I0408 11:54:31.298670  385291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:54:31.298915  385291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 11:54:31.299088  385291 out.go:298] Setting JSON to false
	I0408 11:54:31.299113  385291 mustload.go:65] Loading cluster: multinode-026136
	I0408 11:54:31.299216  385291 notify.go:220] Checking for updates...
	I0408 11:54:31.299482  385291 config.go:182] Loaded profile config "multinode-026136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 11:54:31.299496  385291 status.go:255] checking status of multinode-026136 ...
	I0408 11:54:31.299887  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.299956  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.318026  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0408 11:54:31.318544  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.319110  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.319140  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.319602  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.319807  385291 main.go:141] libmachine: (multinode-026136) Calling .GetState
	I0408 11:54:31.321451  385291 status.go:330] multinode-026136 host status = "Running" (err=<nil>)
	I0408 11:54:31.321467  385291 host.go:66] Checking if "multinode-026136" exists ...
	I0408 11:54:31.321768  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.321807  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.338646  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0408 11:54:31.339113  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.339653  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.339676  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.340026  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.340284  385291 main.go:141] libmachine: (multinode-026136) Calling .GetIP
	I0408 11:54:31.343031  385291 main.go:141] libmachine: (multinode-026136) DBG | domain multinode-026136 has defined MAC address 52:54:00:4d:6d:5f in network mk-multinode-026136
	I0408 11:54:31.343514  385291 main.go:141] libmachine: (multinode-026136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:6d:5f", ip: ""} in network mk-multinode-026136: {Iface:virbr1 ExpiryTime:2024-04-08 12:52:05 +0000 UTC Type:0 Mac:52:54:00:4d:6d:5f Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-026136 Clientid:01:52:54:00:4d:6d:5f}
	I0408 11:54:31.343552  385291 main.go:141] libmachine: (multinode-026136) DBG | domain multinode-026136 has defined IP address 192.168.39.211 and MAC address 52:54:00:4d:6d:5f in network mk-multinode-026136
	I0408 11:54:31.343626  385291 host.go:66] Checking if "multinode-026136" exists ...
	I0408 11:54:31.343954  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.343996  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.359272  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0408 11:54:31.359718  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.360200  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.360229  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.360590  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.360784  385291 main.go:141] libmachine: (multinode-026136) Calling .DriverName
	I0408 11:54:31.360982  385291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:54:31.361004  385291 main.go:141] libmachine: (multinode-026136) Calling .GetSSHHostname
	I0408 11:54:31.363747  385291 main.go:141] libmachine: (multinode-026136) DBG | domain multinode-026136 has defined MAC address 52:54:00:4d:6d:5f in network mk-multinode-026136
	I0408 11:54:31.364168  385291 main.go:141] libmachine: (multinode-026136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:6d:5f", ip: ""} in network mk-multinode-026136: {Iface:virbr1 ExpiryTime:2024-04-08 12:52:05 +0000 UTC Type:0 Mac:52:54:00:4d:6d:5f Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-026136 Clientid:01:52:54:00:4d:6d:5f}
	I0408 11:54:31.364196  385291 main.go:141] libmachine: (multinode-026136) DBG | domain multinode-026136 has defined IP address 192.168.39.211 and MAC address 52:54:00:4d:6d:5f in network mk-multinode-026136
	I0408 11:54:31.364331  385291 main.go:141] libmachine: (multinode-026136) Calling .GetSSHPort
	I0408 11:54:31.364519  385291 main.go:141] libmachine: (multinode-026136) Calling .GetSSHKeyPath
	I0408 11:54:31.364664  385291 main.go:141] libmachine: (multinode-026136) Calling .GetSSHUsername
	I0408 11:54:31.364787  385291 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/multinode-026136/id_rsa Username:docker}
	I0408 11:54:31.448301  385291 ssh_runner.go:195] Run: systemctl --version
	I0408 11:54:31.454977  385291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:54:31.472587  385291 kubeconfig.go:125] found "multinode-026136" server: "https://192.168.39.211:8443"
	I0408 11:54:31.472671  385291 api_server.go:166] Checking apiserver status ...
	I0408 11:54:31.472739  385291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:54:31.488355  385291 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0408 11:54:31.500372  385291 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:54:31.500426  385291 ssh_runner.go:195] Run: ls
	I0408 11:54:31.505062  385291 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I0408 11:54:31.509377  385291 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I0408 11:54:31.509400  385291 status.go:422] multinode-026136 apiserver status = Running (err=<nil>)
	I0408 11:54:31.509410  385291 status.go:257] multinode-026136 status: &{Name:multinode-026136 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:54:31.509428  385291 status.go:255] checking status of multinode-026136-m02 ...
	I0408 11:54:31.509700  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.509732  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.525103  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0408 11:54:31.525544  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.525999  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.526022  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.526387  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.526555  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetState
	I0408 11:54:31.528070  385291 status.go:330] multinode-026136-m02 host status = "Running" (err=<nil>)
	I0408 11:54:31.528088  385291 host.go:66] Checking if "multinode-026136-m02" exists ...
	I0408 11:54:31.528420  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.528467  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.543236  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0408 11:54:31.543685  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.544193  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.544229  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.544568  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.544773  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetIP
	I0408 11:54:31.547288  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | domain multinode-026136-m02 has defined MAC address 52:54:00:10:d9:7b in network mk-multinode-026136
	I0408 11:54:31.547690  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d9:7b", ip: ""} in network mk-multinode-026136: {Iface:virbr1 ExpiryTime:2024-04-08 12:53:06 +0000 UTC Type:0 Mac:52:54:00:10:d9:7b Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-026136-m02 Clientid:01:52:54:00:10:d9:7b}
	I0408 11:54:31.547718  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | domain multinode-026136-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:d9:7b in network mk-multinode-026136
	I0408 11:54:31.547856  385291 host.go:66] Checking if "multinode-026136-m02" exists ...
	I0408 11:54:31.548137  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.548172  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.563730  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0408 11:54:31.564111  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.564591  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.564617  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.564947  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.565115  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .DriverName
	I0408 11:54:31.565299  385291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:54:31.565321  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetSSHHostname
	I0408 11:54:31.568223  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | domain multinode-026136-m02 has defined MAC address 52:54:00:10:d9:7b in network mk-multinode-026136
	I0408 11:54:31.568661  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d9:7b", ip: ""} in network mk-multinode-026136: {Iface:virbr1 ExpiryTime:2024-04-08 12:53:06 +0000 UTC Type:0 Mac:52:54:00:10:d9:7b Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-026136-m02 Clientid:01:52:54:00:10:d9:7b}
	I0408 11:54:31.568690  385291 main.go:141] libmachine: (multinode-026136-m02) DBG | domain multinode-026136-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:d9:7b in network mk-multinode-026136
	I0408 11:54:31.568855  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetSSHPort
	I0408 11:54:31.569063  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetSSHKeyPath
	I0408 11:54:31.569243  385291 main.go:141] libmachine: (multinode-026136-m02) Calling .GetSSHUsername
	I0408 11:54:31.569382  385291 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-354699/.minikube/machines/multinode-026136-m02/id_rsa Username:docker}
	I0408 11:54:31.651107  385291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:54:31.668434  385291 status.go:257] multinode-026136-m02 status: &{Name:multinode-026136-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:54:31.668482  385291 status.go:255] checking status of multinode-026136-m03 ...
	I0408 11:54:31.668840  385291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 11:54:31.668891  385291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:54:31.684471  385291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0408 11:54:31.684881  385291 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:54:31.685387  385291 main.go:141] libmachine: Using API Version  1
	I0408 11:54:31.685409  385291 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:54:31.685755  385291 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:54:31.685973  385291 main.go:141] libmachine: (multinode-026136-m03) Calling .GetState
	I0408 11:54:31.687598  385291 status.go:330] multinode-026136-m03 host status = "Stopped" (err=<nil>)
	I0408 11:54:31.687614  385291 status.go:343] host is not running, skipping remaining checks
	I0408 11:54:31.687621  385291 status.go:257] multinode-026136-m03 status: &{Name:multinode-026136-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

TestMultiNode/serial/StartAfterStop (26.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-026136 node start m03 -v=7 --alsologtostderr: (25.526532092s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.17s)

TestMultiNode/serial/RestartKeepsNodes (299.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026136
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-026136
E0408 11:55:02.327553  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 11:56:30.441755  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-026136: (3m5.363326153s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026136 --wait=true -v=8 --alsologtostderr
E0408 11:59:33.487455  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026136 --wait=true -v=8 --alsologtostderr: (1m54.210150041s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026136
--- PASS: TestMultiNode/serial/RestartKeepsNodes (299.69s)

TestMultiNode/serial/DeleteNode (2.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-026136 node delete m03: (1.655279208s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.21s)

TestMultiNode/serial/StopMultiNode (184.24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 stop
E0408 12:00:02.327024  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 12:01:30.442039  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-026136 stop: (3m4.037387874s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026136 status: exit status 7 (99.979697ms)

-- stdout --
	multinode-026136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr: exit status 7 (98.048971ms)

-- stdout --
	multinode-026136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-026136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 12:03:03.953044  387430 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:03:03.953327  387430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:03:03.953338  387430 out.go:304] Setting ErrFile to fd 2...
	I0408 12:03:03.953344  387430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:03:03.953555  387430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 12:03:03.953766  387430 out.go:298] Setting JSON to false
	I0408 12:03:03.953799  387430 mustload.go:65] Loading cluster: multinode-026136
	I0408 12:03:03.953914  387430 notify.go:220] Checking for updates...
	I0408 12:03:03.954239  387430 config.go:182] Loaded profile config "multinode-026136": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 12:03:03.954258  387430 status.go:255] checking status of multinode-026136 ...
	I0408 12:03:03.954728  387430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 12:03:03.954796  387430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:03:03.973252  387430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0408 12:03:03.973732  387430 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:03:03.974306  387430 main.go:141] libmachine: Using API Version  1
	I0408 12:03:03.974329  387430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:03:03.974731  387430 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:03:03.974980  387430 main.go:141] libmachine: (multinode-026136) Calling .GetState
	I0408 12:03:03.976655  387430 status.go:330] multinode-026136 host status = "Stopped" (err=<nil>)
	I0408 12:03:03.976674  387430 status.go:343] host is not running, skipping remaining checks
	I0408 12:03:03.976682  387430 status.go:257] multinode-026136 status: &{Name:multinode-026136 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 12:03:03.976744  387430 status.go:255] checking status of multinode-026136-m02 ...
	I0408 12:03:03.977168  387430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 12:03:03.977224  387430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:03:03.991873  387430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0408 12:03:03.992345  387430 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:03:03.992823  387430 main.go:141] libmachine: Using API Version  1
	I0408 12:03:03.992843  387430 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:03:03.993152  387430 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:03:03.993349  387430 main.go:141] libmachine: (multinode-026136-m02) Calling .GetState
	I0408 12:03:03.994669  387430 status.go:330] multinode-026136-m02 host status = "Stopped" (err=<nil>)
	I0408 12:03:03.994688  387430 status.go:343] host is not running, skipping remaining checks
	I0408 12:03:03.994697  387430 status.go:257] multinode-026136-m02 status: &{Name:multinode-026136-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.24s)

TestMultiNode/serial/RestartMultiNode (81.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026136 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026136 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m20.931218557s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-026136 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.50s)

TestMultiNode/serial/ValidateNameConflict (51.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-026136
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026136-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-026136-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (76.260129ms)

-- stdout --
	* [multinode-026136-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-026136-m02' is duplicated with machine name 'multinode-026136-m02' in profile 'multinode-026136'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-026136-m03 --driver=kvm2  --container-runtime=containerd
E0408 12:05:02.327037  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-026136-m03 --driver=kvm2  --container-runtime=containerd: (49.75556836s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-026136
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-026136: exit status 80 (240.616026ms)

-- stdout --
	* Adding node m03 to cluster multinode-026136 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-026136-m03 already exists in multinode-026136-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-026136-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-026136-m03: (1.017908076s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.15s)
TestPreload (394.06s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-444283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0408 12:06:30.441359  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-444283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m32.376878587s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-444283 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-444283 image pull gcr.io/k8s-minikube/busybox: (2.545393843s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-444283
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-444283: (1m32.448755589s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-444283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0408 12:09:45.378702  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 12:10:02.326882  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 12:11:30.441141  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-444283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m25.380191731s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-444283 image list
helpers_test.go:175: Cleaning up "test-preload-444283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-444283
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-444283: (1.057108867s)
--- PASS: TestPreload (394.06s)
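The preload round-trip exercised above can be condensed into a short shell sketch: start without a preload, pull an image, stop, restart, then list images to confirm the pulled image survived. The `minikube` call below is a stub that only echoes its arguments, so the snippet runs standalone; the profile name and flags are taken from the log.

```shell
# Stub standing in for the real minikube binary; it just echoes its args
# so this sketch is runnable without KVM or a cluster.
minikube() { echo "minikube $*"; }

p=test-preload-444283
minikube start -p "$p" --memory=2200 --preload=false --kubernetes-version=v1.24.4
minikube -p "$p" image pull gcr.io/k8s-minikube/busybox
minikube stop -p "$p"
minikube start -p "$p" --memory=2200
minikube -p "$p" image list   # busybox should still be listed after the restart
```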
TestScheduledStopUnix (115.85s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-825622 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-825622 --memory=2048 --driver=kvm2  --container-runtime=containerd: (44.100872933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825622 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825622 -n scheduled-stop-825622
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825622 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825622 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825622 -n scheduled-stop-825622
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825622
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825622 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825622
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-825622: exit status 7 (75.966174ms)
-- stdout --
	scheduled-stop-825622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825622 -n scheduled-stop-825622
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825622 -n scheduled-stop-825622: exit status 7 (75.189877ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-825622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-825622
--- PASS: TestScheduledStopUnix (115.85s)
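The scheduled-stop flow this test walks through can be sketched as the following shell commands. `minikube` is again stubbed to echo its arguments so the snippet runs without a cluster; the flags mirror the commands in the log.

```shell
# Stub for minikube; echoes its arguments so no VM is needed.
minikube() { echo "minikube $*"; }

p=scheduled-stop-825622
minikube stop -p "$p" --schedule 5m        # arm a stop 5 minutes out
minikube stop -p "$p" --cancel-scheduled   # cancel the pending stop
minikube stop -p "$p" --schedule 15s       # re-arm with a short deadline
```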
TestRunningBinaryUpgrade (198.35s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.725941051 start -p running-upgrade-906817 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0408 12:16:30.441975  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.725941051 start -p running-upgrade-906817 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m59.439248065s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-906817 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-906817 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.051927409s)
helpers_test.go:175: Cleaning up "running-upgrade-906817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-906817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-906817: (1.313892357s)
--- PASS: TestRunningBinaryUpgrade (198.35s)
TestKubernetesUpgrade (216.34s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m17.979431413s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-617765
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-617765: (2.310377351s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-617765 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-617765 status --format={{.Host}}: exit status 7 (77.154512ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
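The `status error: exit status 7 (may be ok)` note above reflects that `minikube status` signals a stopped host through a non-zero exit code (7 here, with `Stopped` on stdout), so a script consuming it should capture the code rather than abort. A minimal sketch of that pattern, with a stub `minikube` standing in for a stopped cluster so the snippet is self-contained:

```shell
# Stub simulating `minikube status` against a stopped cluster:
# prints "Stopped" and exits 7, matching the log above.
minikube() { echo "Stopped"; return 7; }

set +e   # don't abort on the expected non-zero exit
out=$(minikube status --format='{{.Host}}' -p kubernetes-upgrade-617765)
code=$?
set -e
echo "host=$out code=$code"   # prints: host=Stopped code=7
```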
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m43.150950269s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-617765 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (94.95335ms)
-- stdout --
	* [kubernetes-upgrade-617765] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-617765
	    minikube start -p kubernetes-upgrade-617765 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6177652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-617765 --kubernetes-version=v1.30.0-rc.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-617765 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (31.403388118s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-617765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-617765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-617765: (1.266615854s)
--- PASS: TestKubernetesUpgrade (216.34s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (102.601092ms)
-- stdout --
	* [NoKubernetes-405120] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
TestNoKubernetes/serial/StartWithK8s (124.63s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405120 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405120 --driver=kvm2  --container-runtime=containerd: (2m4.351863551s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-405120 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (124.63s)
TestNetworkPlugins/group/false (4.01s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-194741 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-194741 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (218.237947ms)
-- stdout --
	* [false-194741] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0408 12:15:05.439475  392533 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:15:05.439733  392533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:15:05.439745  392533 out.go:304] Setting ErrFile to fd 2...
	I0408 12:15:05.439749  392533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:15:05.440000  392533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-354699/.minikube/bin
	I0408 12:15:05.440609  392533 out.go:298] Setting JSON to false
	I0408 12:15:05.441662  392533 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7049,"bootTime":1712571457,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:15:05.441777  392533 start.go:139] virtualization: kvm guest
	I0408 12:15:05.443721  392533 out.go:177] * [false-194741] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:15:05.445238  392533 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:15:05.446551  392533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:15:05.445196  392533 notify.go:220] Checking for updates...
	I0408 12:15:05.448982  392533 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-354699/kubeconfig
	I0408 12:15:05.450190  392533 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-354699/.minikube
	I0408 12:15:05.451385  392533 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:15:05.452661  392533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:15:05.454397  392533 config.go:182] Loaded profile config "NoKubernetes-405120": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 12:15:05.454503  392533 config.go:182] Loaded profile config "cert-expiration-474417": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 12:15:05.454601  392533 config.go:182] Loaded profile config "offline-containerd-385972": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 12:15:05.454686  392533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:15:05.584063  392533 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 12:15:05.585302  392533 start.go:297] selected driver: kvm2
	I0408 12:15:05.585318  392533 start.go:901] validating driver "kvm2" against <nil>
	I0408 12:15:05.585329  392533 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:15:05.587212  392533 out.go:177] 
	W0408 12:15:05.588368  392533 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0408 12:15:05.589475  392533 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-194741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-194741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-194741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: cert-expiration-474417
contexts:
- context:
    cluster: cert-expiration-474417
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-474417
  name: cert-expiration-474417
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474417
  user:
    client-certificate: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.crt
    client-key: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-194741

>>> host: docker daemon status:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: docker daemon config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /etc/docker/daemon.json:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: docker system info:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: cri-docker daemon status:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: cri-docker daemon config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: cri-dockerd version:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: containerd daemon status:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: containerd daemon config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /etc/containerd/config.toml:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: containerd config dump:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: crio daemon status:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: crio daemon config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: /etc/crio:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

>>> host: crio config:
* Profile "false-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-194741"

----------------------- debugLogs end: false-194741 [took: 3.624886787s] --------------------------------
helpers_test.go:175: Cleaning up "false-194741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-194741
--- PASS: TestNetworkPlugins/group/false (4.01s)

TestNoKubernetes/serial/StartWithStopK8s (19.05s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (17.718138019s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-405120 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-405120 status -o json: exit status 2 (249.172293ms)

-- stdout --
	{"Name":"NoKubernetes-405120","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-405120
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-405120: (1.077368923s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.05s)

TestNoKubernetes/serial/Start (27.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0408 12:16:13.488048  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405120 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.080681731s)
--- PASS: TestNoKubernetes/serial/Start (27.08s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-405120 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-405120 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.967232ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (0.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-405120
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-405120: (1.490302326s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (44.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405120 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405120 --driver=kvm2  --container-runtime=containerd: (44.268525s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-405120 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-405120 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.796988ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStoppedBinaryUpgrade/Setup (2.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.54s)

TestStoppedBinaryUpgrade/Upgrade (140.07s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3959749433 start -p stopped-upgrade-496279 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3959749433 start -p stopped-upgrade-496279 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m14.61935143s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3959749433 -p stopped-upgrade-496279 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3959749433 -p stopped-upgrade-496279 stop: (2.193253844s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-496279 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-496279 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.261543352s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (140.07s)

TestPause/serial/Start (63.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-412002 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-412002 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m3.577512909s)
--- PASS: TestPause/serial/Start (63.58s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-496279
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-496279: (1.094546984s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

TestNetworkPlugins/group/auto/Start (111.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m51.926386518s)
--- PASS: TestNetworkPlugins/group/auto/Start (111.93s)

TestNetworkPlugins/group/flannel/Start (115.09s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m55.085853875s)
--- PASS: TestNetworkPlugins/group/flannel/Start (115.09s)

TestNetworkPlugins/group/enable-default-cni/Start (153.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0408 12:20:02.326986  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m33.409488795s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (153.41s)

TestPause/serial/SecondStartNoReconfiguration (83.82s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-412002 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0408 12:21:30.441044  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-412002 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.783095567s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (83.82s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4t67m" [e6c9f7b3-e37e-4d2e-9c1b-5aa3c5dfb260] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4t67m" [e6c9f7b3-e37e-4d2e-9c1b-5aa3c5dfb260] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.009214083s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k6lc9" [49d6c63c-29f7-427d-a007-c06333839303] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004896227s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jhpzs" [2a642644-96f9-4e20-90c6-a558a8188922] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jhpzs" [2a642644-96f9-4e20-90c6-a558a8188922] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.011582152s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-412002 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-412002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-412002 --output=json --layout=cluster: exit status 2 (278.063179ms)

-- stdout --
	{"Name":"pause-412002","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-412002","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-412002 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (1.00s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-412002 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-412002 --alsologtostderr -v=5: (1.001570385s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

TestPause/serial/DeletePaused (0.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-412002 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (65.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m5.893279248s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.89s)

TestNetworkPlugins/group/calico/Start (116.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m56.385363495s)
--- PASS: TestNetworkPlugins/group/calico/Start (116.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (106.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m46.374249944s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.37s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c6dc4" [81ffac45-c48d-4bd5-9878-3a54473d8acd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c6dc4" [81ffac45-c48d-4bd5-9878-3a54473d8acd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005282942s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (123.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-194741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m3.020263604s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (123.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (9.42s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-svjxg" [8aaa3525-7704-48f8-bd0d-f46ce57841b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-svjxg" [8aaa3525-7704-48f8-bd0d-f46ce57841b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005744123s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.42s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (205.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-062197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-062197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m25.645929407s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (205.65s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cqvl4" [9ad320d9-11f2-4b51-865e-186bcf884e5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006560518s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dmg7c" [b2cac9bd-010e-4563-a014-227c1ada5173] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00550956s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s8kqx" [51c89e05-311a-427d-ab13-00244e15b6eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s8kqx" [51c89e05-311a-427d-ab13-00244e15b6eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005338142s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wphwz" [dac26b59-3a6e-4ffe-858a-f00f63d4606d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wphwz" [dac26b59-3a6e-4ffe-858a-f00f63d4606d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005334905s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (178.8s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0: (2m58.802248606s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (178.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-971363 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 12:25:02.326899  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-971363 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m31.005356823s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-194741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-194741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sj65g" [c8e023fd-2328-4a64-909f-bf4616252d07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sj65g" [c8e023fd-2328-4a64-909f-bf4616252d07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.009229242s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-194741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-194741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
E0408 12:32:48.195004  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:32:53.488931  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 12:33:03.264742  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory

TestStartStop/group/newest-cni/serial/FirstStart (71.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-460740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-460740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0: (1m11.948256257s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (71.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-971363 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee739baa-051f-40e4-a025-3bfa0ea97e34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee739baa-051f-40e4-a025-3bfa0ea97e34] Running
E0408 12:26:25.379587  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005023457s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-971363 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-971363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 12:26:30.440672  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-971363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003289892s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-971363 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-971363 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-971363 --alsologtostderr -v=3: (1m32.03220901s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-460740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 12:26:44.691995  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:44.697276  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:44.708192  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:44.729238  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:44.769577  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:44.849985  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:45.010433  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:45.331072  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-460740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.311397889s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/newest-cni/serial/Stop (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-460740 --alsologtostderr -v=3
E0408 12:26:45.971968  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:47.252418  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-460740 --alsologtostderr -v=3: (2.478038611s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.48s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-460740 -n newest-cni-460740
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-460740 -n newest-cni-460740: exit status 7 (100.146508ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-460740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (37.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-460740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0
E0408 12:26:49.812946  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:52.213633  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.218932  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.229279  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.249743  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.290091  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.370563  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.531006  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:52.852065  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:53.492290  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:54.772960  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:26:54.933331  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:26:57.333227  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:27:02.454320  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:27:05.174074  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-460740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0: (36.972848855s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-460740 -n newest-cni-460740
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-062197 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c69397b-b6af-4239-8bd2-2c4080a83ed2] Pending
helpers_test.go:344: "busybox" [8c69397b-b6af-4239-8bd2-2c4080a83ed2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0408 12:27:12.694744  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8c69397b-b6af-4239-8bd2-2c4080a83ed2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00448471s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-062197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-062197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-062197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/old-k8s-version/serial/Stop (92.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-062197 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-062197 --alsologtostderr -v=3: (1m32.483537617s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-460740 image list --format=json
E0408 12:27:25.654881  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-460740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-460740 -n newest-cni-460740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-460740 -n newest-cni-460740: exit status 2 (255.721833ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-460740 -n newest-cni-460740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-460740 -n newest-cni-460740: exit status 2 (251.730739ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-460740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-460740 -n newest-cni-460740
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-460740 -n newest-cni-460740
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

TestStartStop/group/embed-certs/serial/FirstStart (60.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-096690 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 12:27:33.175347  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:27:35.579939  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.585197  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.595443  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.615702  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.656400  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.736794  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:35.897199  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:36.218202  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:36.858788  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:38.139746  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:27:40.700061  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-096690 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m0.970961549s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.97s)

TestStartStop/group/no-preload/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-831797 create -f testdata/busybox.yaml
E0408 12:27:45.820857  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1482635d-d8de-4719-922e-f62ff02bb6c8] Pending
helpers_test.go:344: "busybox" [1482635d-d8de-4719-922e-f62ff02bb6c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1482635d-d8de-4719-922e-f62ff02bb6c8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003991468s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-831797 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-831797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 12:27:56.061673  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-831797 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (92.52s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-831797 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-831797 --alsologtostderr -v=3: (1m32.520555256s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.52s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363: exit status 7 (80.263192ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-971363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-971363 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 12:28:06.615931  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:28:14.136207  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:28:15.869283  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:15.874623  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:15.884965  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:15.905261  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:15.945589  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:16.025911  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:16.186347  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:16.507318  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:16.542487  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:28:17.147966  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:18.428745  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:20.989854  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:26.110416  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-971363 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m59.944049119s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.22s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-096690 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb73a88c-a76b-46a2-9b9d-9df1a7201453] Pending
helpers_test.go:344: "busybox" [bb73a88c-a76b-46a2-9b9d-9df1a7201453] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb73a88c-a76b-46a2-9b9d-9df1a7201453] Running
E0408 12:28:36.351227  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005298768s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-096690 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-096690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-096690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022558674s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-096690 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (92.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-096690 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-096690 --alsologtostderr -v=3: (1m32.539011039s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-062197 -n old-k8s-version-062197
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-062197 -n old-k8s-version-062197: exit status 7 (105.75285ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-062197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/old-k8s-version/serial/SecondStart (206.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-062197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0408 12:28:56.832088  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:28:57.502828  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:29:08.603412  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.608740  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.619021  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.639374  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.679674  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.760886  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:08.921316  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:09.241921  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:09.882223  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:11.163138  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:12.615714  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.621000  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.631300  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.651593  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.691940  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.772275  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:12.932737  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:13.252884  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:13.724171  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:13.893731  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:15.174917  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:17.735501  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:18.845319  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:22.856381  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:28.536276  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:29:29.085540  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-062197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m26.029087989s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-062197 -n old-k8s-version-062197
E0408 12:32:19.898064  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (206.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831797 -n no-preload-831797
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831797 -n no-preload-831797: exit status 7 (76.508058ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-831797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (312.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0
E0408 12:29:33.097287  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:29:36.056881  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:29:37.792737  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:29:49.566649  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:29:53.578430  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:30:02.326878  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
E0408 12:30:04.351220  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.356506  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.366772  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.387094  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.427443  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.507981  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.668423  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:04.989182  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:05.629703  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:06.910058  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:09.471138  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.0: (5m12.582023605s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831797 -n no-preload-831797
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (312.85s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096690 -n embed-certs-096690
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096690 -n embed-certs-096690: exit status 7 (86.54032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-096690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (385.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-096690 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 12:30:14.592240  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:19.423954  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/enable-default-cni-194741/client.crt: no such file or directory
E0408 12:30:24.833281  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:30.527223  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:30:34.538856  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:30:45.313497  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:30:59.713750  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
E0408 12:31:26.273733  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/custom-flannel-194741/client.crt: no such file or directory
E0408 12:31:30.441141  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/functional-059950/client.crt: no such file or directory
E0408 12:31:44.691398  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
E0408 12:31:52.213672  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
E0408 12:31:52.448311  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/calico-194741/client.crt: no such file or directory
E0408 12:31:56.459589  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/kindnet-194741/client.crt: no such file or directory
E0408 12:32:12.376708  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-096690 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (6m24.985909456s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096690 -n embed-certs-096690
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (385.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k979f" [82cc76dc-82c3-40b6-b094-26890748b498] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004629796s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k979f" [82cc76dc-82c3-40b6-b094-26890748b498] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00491082s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-062197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-062197 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-062197 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-062197 -n old-k8s-version-062197
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-062197 -n old-k8s-version-062197: exit status 2 (272.383895ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-062197 -n old-k8s-version-062197
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-062197 -n old-k8s-version-062197: exit status 2 (268.326787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-062197 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-062197 -n old-k8s-version-062197
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-062197 -n old-k8s-version-062197
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h9x87" [159027e5-b20a-41b9-9dd6-e2470a42d932] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00653589s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h9x87" [159027e5-b20a-41b9-9dd6-e2470a42d932] Running
E0408 12:33:15.869124  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/bridge-194741/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00466015s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-971363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-971363 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-971363 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363: exit status 2 (266.939521ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363: exit status 2 (270.742113ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-971363 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-971363 -n default-k8s-diff-port-971363
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-ph5lb" [8d5f29bf-16d4-4b94-afb8-5cf7c2b85969] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-ph5lb" [8d5f29bf-16d4-4b94-afb8-5cf7c2b85969] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004538672s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-ph5lb" [8d5f29bf-16d4-4b94-afb8-5cf7c2b85969] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005578224s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-831797 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-831797 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-831797 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831797 -n no-preload-831797
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831797 -n no-preload-831797: exit status 2 (244.929519ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831797 -n no-preload-831797
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831797 -n no-preload-831797: exit status 2 (251.157663ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-831797 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831797 -n no-preload-831797
E0408 12:35:02.326676  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831797 -n no-preload-831797
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-klktb" [2b23b52b-332d-48b2-a8be-db90ff0a4b25] Running
E0408 12:36:41.651242  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/default-k8s-diff-port-971363/client.crt: no such file or directory
E0408 12:36:44.691960  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/auto-194741/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004347069s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-klktb" [2b23b52b-332d-48b2-a8be-db90ff0a4b25] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005135842s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-096690 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-096690 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-096690 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096690 -n embed-certs-096690
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096690 -n embed-certs-096690: exit status 2 (254.260957ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096690 -n embed-certs-096690
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096690 -n embed-certs-096690: exit status 2 (252.166369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-096690 --alsologtostderr -v=1
E0408 12:36:52.214013  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/flannel-194741/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096690 -n embed-certs-096690
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096690 -n embed-certs-096690
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.0/binaries 0
25 TestDownloadOnly/v1.30.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
149 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 3.42
271 TestNetworkPlugins/group/cilium 5.72
286 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.42s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0408 12:15:02.326660  362025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/addons-400631/client.crt: no such file or directory
panic.go:626: 
----------------------- debugLogs start: kubenet-194741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-194741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-194741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 08 Apr 2024 12:14:33 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.39.90:8443
name: cert-expiration-474417
contexts:
- context:
cluster: cert-expiration-474417
extensions:
- extension:
last-update: Mon, 08 Apr 2024 12:14:33 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: context_info
namespace: default
user: cert-expiration-474417
name: cert-expiration-474417
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474417
user:
client-certificate: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.crt
client-key: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-194741

>>> host: docker daemon status:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: docker daemon config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: docker system info:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: cri-docker daemon status:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: cri-docker daemon config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: cri-dockerd version:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: containerd daemon status:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: containerd daemon config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: containerd config dump:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: crio daemon status:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: crio daemon config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: /etc/crio:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

>>> host: crio config:
* Profile "kubenet-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-194741"

----------------------- debugLogs end: kubenet-194741 [took: 3.262583137s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-194741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-194741
--- SKIP: TestNetworkPlugins/group/kubenet (3.42s)

TestNetworkPlugins/group/cilium (5.72s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-194741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-194741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-194741

>>> host: /etc/nsswitch.conf:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/hosts:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/resolv.conf:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-194741

>>> host: crictl pods:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: crictl containers:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> k8s: describe netcat deployment:
error: context "cilium-194741" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-194741" does not exist

>>> k8s: netcat logs:
error: context "cilium-194741" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-194741" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-194741" does not exist

>>> k8s: coredns logs:
error: context "cilium-194741" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-194741" does not exist

>>> k8s: api server logs:
error: context "cilium-194741" does not exist

>>> host: /etc/cni:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: ip a s:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: ip r s:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: iptables-save:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: iptables table nat:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-194741

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-194741

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-194741" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-194741" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-194741

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-194741

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-194741" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-194741" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-194741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-194741" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-194741" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: kubelet daemon config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> k8s: kubelet logs:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18588-354699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.90:8443
  name: cert-expiration-474417
contexts:
- context:
    cluster: cert-expiration-474417
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:14:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-474417
  name: cert-expiration-474417
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474417
  user:
    client-certificate: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.crt
    client-key: /home/jenkins/minikube-integration/18588-354699/.minikube/profiles/cert-expiration-474417/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-194741

>>> host: docker daemon status:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: docker daemon config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: docker system info:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: cri-docker daemon status:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: cri-docker daemon config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: cri-dockerd version:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: containerd daemon status:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: containerd daemon config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: containerd config dump:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: crio daemon status:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: crio daemon config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: /etc/crio:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

>>> host: crio config:
* Profile "cilium-194741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194741"

----------------------- debugLogs end: cilium-194741 [took: 5.555838104s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-194741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-194741
--- SKIP: TestNetworkPlugins/group/cilium (5.72s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-353729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-353729
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)