Test Report: KVM_Linux_containerd 17822

1b14f6e8a127ccddfb64acb15c203e20bb49b800:2023-12-18:32341

Failed tests (1/313)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 41    | TestAddons/parallel/Headlamp | 3.12s    |
TestAddons/parallel/Headlamp (3.12s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-522125 --alsologtostderr -v=1
addons_test.go:823: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-522125 --alsologtostderr -v=1: exit status 11 (356.945146ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1218 22:43:05.962184   15728 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:43:05.962353   15728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:43:05.962365   15728 out.go:309] Setting ErrFile to fd 2...
	I1218 22:43:05.962370   15728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:43:05.962563   15728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 22:43:05.962823   15728 mustload.go:65] Loading cluster: addons-522125
	I1218 22:43:05.963147   15728 config.go:182] Loaded profile config "addons-522125": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:43:05.963165   15728 addons.go:594] checking whether the cluster is paused
	I1218 22:43:05.963276   15728 config.go:182] Loaded profile config "addons-522125": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:43:05.963288   15728 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:43:05.963630   15728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:43:05.963669   15728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:43:05.977552   15728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1218 22:43:05.978040   15728 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:43:05.978741   15728 main.go:141] libmachine: Using API Version  1
	I1218 22:43:05.978774   15728 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:43:05.979101   15728 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:43:05.979294   15728 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:43:05.981155   15728 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:43:05.981360   15728 ssh_runner.go:195] Run: systemctl --version
	I1218 22:43:05.981384   15728 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:43:05.983879   15728 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:43:05.984407   15728 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:43:05.984433   15728 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:43:05.984597   15728 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:43:05.984775   15728 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:43:05.984931   15728 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:43:05.985068   15728 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:43:06.088402   15728 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 22:43:06.088477   15728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 22:43:06.172073   15728 cri.go:89] found id: "59f3aa7bf282a8b9ba624981420214fc784b492d637faec7633006133f20ba40"
	I1218 22:43:06.172109   15728 cri.go:89] found id: "d43049fd3fa0346b65b4c711b8abc4edf9cdd4e193823646b13f84af3d6d7497"
	I1218 22:43:06.172117   15728 cri.go:89] found id: "7fcfbd4b0140c32f7dfe512e7409dcccad2ec928568f8427ae2f304542bf92ad"
	I1218 22:43:06.172123   15728 cri.go:89] found id: "0017e05ba3151175c4243a7cf0a46c4da25e0161d0f7496cb6bc6670dff32265"
	I1218 22:43:06.172128   15728 cri.go:89] found id: "e12de51fba43caf710a5b2e02302a3a04471fbd03b3937f6ea2b738d38e9451e"
	I1218 22:43:06.172139   15728 cri.go:89] found id: "889bfb730e7d7a7a9293666658cb3fc502c63e4d5f6401e454ffa101a691952e"
	I1218 22:43:06.172143   15728 cri.go:89] found id: "6a2c5fcec811f5c845d5c03f719f423f2e730540d48a43702f470d72b9e253dd"
	I1218 22:43:06.172146   15728 cri.go:89] found id: "2a491c136d64a51db8286a452f2286b8d2893fbfc235de5fef8f457603cf9a38"
	I1218 22:43:06.172149   15728 cri.go:89] found id: "70035f67f35b504bcd4d816678f227cc3de333fa3d8da7676f31468f3804db59"
	I1218 22:43:06.172162   15728 cri.go:89] found id: "7320bde2a3f20633d2cc2b12a184e7e8835fd0edc0085d9e2fd63df47d1f8a4e"
	I1218 22:43:06.172170   15728 cri.go:89] found id: "1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06"
	I1218 22:43:06.172187   15728 cri.go:89] found id: "cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea"
	I1218 22:43:06.172197   15728 cri.go:89] found id: "bac74863cb3d77e0f05d202c0c9e3a116cf7c080c91d7ccbe113110c41f8b68f"
	I1218 22:43:06.172206   15728 cri.go:89] found id: "3044e7ea28bbd5337038b0d10c0fe3f8af91a81cca8c762dc1809a37d6e5d24e"
	I1218 22:43:06.172214   15728 cri.go:89] found id: "764ec01b05490cfef69fa79a2c6154445fdc655a18274418b5f8c9e2069b7036"
	I1218 22:43:06.172219   15728 cri.go:89] found id: "9c928c7e3cf5e53b9a52d782895bbe4287099e27aa162d091b3474d978efd6d0"
	I1218 22:43:06.172228   15728 cri.go:89] found id: "3b35022d7f3f37c0fcd745e49d682d7fbd28c452176957b39ce07478c4c33a5a"
	I1218 22:43:06.172242   15728 cri.go:89] found id: "cae58447199889b38bd869f292531cf50de98d4503597566def581d1762bd1d9"
	I1218 22:43:06.172252   15728 cri.go:89] found id: "7ea083d61aa0619aea833be91dad56867efc0c1471c51ace6ccd9abab50fa1ef"
	I1218 22:43:06.172258   15728 cri.go:89] found id: "305a65fdd1886389ada3c7db940da82dba5546315e40f71dd7c5673d4a8969bc"
	I1218 22:43:06.172267   15728 cri.go:89] found id: "4afe1b38007de3490feb9b29573f6cb34020ba355282306ca97aeb17d87a155f"
	I1218 22:43:06.172273   15728 cri.go:89] found id: "bc1bb7a6acd286e4984072e9f4cf8d32f62bd6e8ba8ac997c0870d29b81d83f0"
	I1218 22:43:06.172282   15728 cri.go:89] found id: ""
	I1218 22:43:06.172355   15728 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1218 22:43:06.255132   15728 main.go:141] libmachine: Making call to close driver server
	I1218 22:43:06.255149   15728 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:43:06.255433   15728 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:43:06.255459   15728 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:43:06.257728   15728 out.go:177] 
	W1218 22:43:06.259054   15728 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-18T22:43:06Z" level=error msg="stat /run/containerd/runc/k8s.io/9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-18T22:43:06Z" level=error msg="stat /run/containerd/runc/k8s.io/9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c: no such file or directory"
	
	W1218 22:43:06.259087   15728 out.go:239] * 
	* 
	W1218 22:43:06.260789   15728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 22:43:06.262399   15728 out.go:177] 

** /stderr **
addons_test.go:825: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-522125 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-522125 -n addons-522125
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-522125 logs -n 25: (1.9264019s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | -p download-only-134172                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:38 UTC |                     |
	|         | -p download-only-134172                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:39 UTC |                     |
	|         | -p download-only-134172                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	| delete  | -p download-only-134172                                                                     | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	| delete  | -p download-only-134172                                                                     | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-383810 | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC |                     |
	|         | binary-mirror-383810                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35955                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-383810                                                                     | binary-mirror-383810 | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	| addons  | disable dashboard -p                                                                        | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC |                     |
	|         | addons-522125                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC |                     |
	|         | addons-522125                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-522125 --wait=true                                                                | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-522125 addons                                                                        | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:42 UTC | 18 Dec 23 22:42 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:42 UTC | 18 Dec 23 22:43 UTC |
	|         | addons-522125                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-522125 ssh cat                                                                       | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:43 UTC | 18 Dec 23 22:43 UTC |
	|         | /opt/local-path-provisioner/pvc-5df8efa3-be39-4112-b71b-2a19c24d1a6e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-522125 addons disable                                                                | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:43 UTC | 18 Dec 23 22:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-522125 ip                                                                            | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:43 UTC | 18 Dec 23 22:43 UTC |
	| addons  | addons-522125 addons disable                                                                | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:43 UTC | 18 Dec 23 22:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-522125        | jenkins | v1.32.0 | 18 Dec 23 22:43 UTC |                     |
	|         | -p addons-522125                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:40:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:40:12.762127   14354 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:40:12.762355   14354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:40:12.762363   14354 out.go:309] Setting ErrFile to fd 2...
	I1218 22:40:12.762367   14354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:40:12.762552   14354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 22:40:12.763134   14354 out.go:303] Setting JSON to false
	I1218 22:40:12.763878   14354 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1359,"bootTime":1702937854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:40:12.763932   14354 start.go:138] virtualization: kvm guest
	I1218 22:40:12.766217   14354 out.go:177] * [addons-522125] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 22:40:12.767873   14354 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:40:12.769438   14354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:40:12.767934   14354 notify.go:220] Checking for updates...
	I1218 22:40:12.772321   14354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:40:12.773809   14354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:40:12.775268   14354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 22:40:12.776835   14354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:40:12.778637   14354 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:40:12.809363   14354 out.go:177] * Using the kvm2 driver based on user configuration
	I1218 22:40:12.810753   14354 start.go:298] selected driver: kvm2
	I1218 22:40:12.810768   14354 start.go:902] validating driver "kvm2" against <nil>
	I1218 22:40:12.810778   14354 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:40:12.811426   14354 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:40:12.811496   14354 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17822-6323/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 22:40:12.825327   14354 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 22:40:12.825396   14354 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 22:40:12.825607   14354 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 22:40:12.825660   14354 cni.go:84] Creating CNI manager for ""
	I1218 22:40:12.825672   14354 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:40:12.825681   14354 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 22:40:12.825691   14354 start_flags.go:323] config:
	{Name:addons-522125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-522125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:40:12.825803   14354 iso.go:125] acquiring lock: {Name:mk45271b640590b559d12c4c43666d7b9d627a43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:40:12.827918   14354 out.go:177] * Starting control plane node addons-522125 in cluster addons-522125
	I1218 22:40:12.829411   14354 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 22:40:12.829453   14354 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1218 22:40:12.829460   14354 cache.go:56] Caching tarball of preloaded images
	I1218 22:40:12.829548   14354 preload.go:174] Found /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 22:40:12.829562   14354 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1218 22:40:12.829847   14354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/config.json ...
	I1218 22:40:12.829870   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/config.json: {Name:mk269cb0dc3ee2017cdb4aa72b1b9ce9441b6bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:12.830022   14354 start.go:365] acquiring machines lock for addons-522125: {Name:mk3e2b0d1bfd222fd8e0d9e300c76810d4b00f9d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 22:40:12.830081   14354 start.go:369] acquired machines lock for "addons-522125" in 42.388µs
	I1218 22:40:12.830105   14354 start.go:93] Provisioning new machine with config: &{Name:addons-522125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-522125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 22:40:12.830177   14354 start.go:125] createHost starting for "" (driver="kvm2")
	I1218 22:40:12.831993   14354 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1218 22:40:12.832149   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:40:12.832204   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:40:12.845482   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I1218 22:40:12.845937   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:40:12.846502   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:40:12.846523   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:40:12.846884   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:40:12.847068   14354 main.go:141] libmachine: (addons-522125) Calling .GetMachineName
	I1218 22:40:12.847223   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:12.847395   14354 start.go:159] libmachine.API.Create for "addons-522125" (driver="kvm2")
	I1218 22:40:12.847430   14354 client.go:168] LocalClient.Create starting
	I1218 22:40:12.847477   14354 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem
	I1218 22:40:13.348652   14354 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/cert.pem
	I1218 22:40:13.484516   14354 main.go:141] libmachine: Running pre-create checks...
	I1218 22:40:13.484538   14354 main.go:141] libmachine: (addons-522125) Calling .PreCreateCheck
	I1218 22:40:13.485037   14354 main.go:141] libmachine: (addons-522125) Calling .GetConfigRaw
	I1218 22:40:13.485545   14354 main.go:141] libmachine: Creating machine...
	I1218 22:40:13.485559   14354 main.go:141] libmachine: (addons-522125) Calling .Create
	I1218 22:40:13.485712   14354 main.go:141] libmachine: (addons-522125) Creating KVM machine...
	I1218 22:40:13.486910   14354 main.go:141] libmachine: (addons-522125) DBG | found existing default KVM network
	I1218 22:40:13.487628   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:13.487500   14376 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f210}
	I1218 22:40:13.493384   14354 main.go:141] libmachine: (addons-522125) DBG | trying to create private KVM network mk-addons-522125 192.168.39.0/24...
	I1218 22:40:13.557862   14354 main.go:141] libmachine: (addons-522125) DBG | private KVM network mk-addons-522125 192.168.39.0/24 created
	I1218 22:40:13.557899   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:13.557840   14376 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:40:13.557914   14354 main.go:141] libmachine: (addons-522125) Setting up store path in /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125 ...
	I1218 22:40:13.557941   14354 main.go:141] libmachine: (addons-522125) Building disk image from file:///home/jenkins/minikube-integration/17822-6323/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1218 22:40:13.557991   14354 main.go:141] libmachine: (addons-522125) Downloading /home/jenkins/minikube-integration/17822-6323/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17822-6323/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1218 22:40:13.782068   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:13.781917   14376 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa...
	I1218 22:40:13.941413   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:13.941282   14376 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/addons-522125.rawdisk...
	I1218 22:40:13.941450   14354 main.go:141] libmachine: (addons-522125) DBG | Writing magic tar header
	I1218 22:40:13.941465   14354 main.go:141] libmachine: (addons-522125) DBG | Writing SSH key tar header
	I1218 22:40:13.941481   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:13.941387   14376 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125 ...
	I1218 22:40:13.941499   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125
	I1218 22:40:13.941526   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17822-6323/.minikube/machines
	I1218 22:40:13.941551   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125 (perms=drwx------)
	I1218 22:40:13.941568   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins/minikube-integration/17822-6323/.minikube/machines (perms=drwxr-xr-x)
	I1218 22:40:13.941582   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:40:13.941600   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17822-6323
	I1218 22:40:13.941619   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1218 22:40:13.941635   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins/minikube-integration/17822-6323/.minikube (perms=drwxr-xr-x)
	I1218 22:40:13.941656   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins/minikube-integration/17822-6323 (perms=drwxrwxr-x)
	I1218 22:40:13.941671   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1218 22:40:13.941693   14354 main.go:141] libmachine: (addons-522125) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1218 22:40:13.941701   14354 main.go:141] libmachine: (addons-522125) Creating domain...
	I1218 22:40:13.941743   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home/jenkins
	I1218 22:40:13.941775   14354 main.go:141] libmachine: (addons-522125) DBG | Checking permissions on dir: /home
	I1218 22:40:13.941788   14354 main.go:141] libmachine: (addons-522125) DBG | Skipping /home - not owner
	I1218 22:40:13.942602   14354 main.go:141] libmachine: (addons-522125) define libvirt domain using xml: 
	I1218 22:40:13.942625   14354 main.go:141] libmachine: (addons-522125) <domain type='kvm'>
	I1218 22:40:13.942633   14354 main.go:141] libmachine: (addons-522125)   <name>addons-522125</name>
	I1218 22:40:13.942638   14354 main.go:141] libmachine: (addons-522125)   <memory unit='MiB'>4000</memory>
	I1218 22:40:13.942644   14354 main.go:141] libmachine: (addons-522125)   <vcpu>2</vcpu>
	I1218 22:40:13.942653   14354 main.go:141] libmachine: (addons-522125)   <features>
	I1218 22:40:13.942659   14354 main.go:141] libmachine: (addons-522125)     <acpi/>
	I1218 22:40:13.942665   14354 main.go:141] libmachine: (addons-522125)     <apic/>
	I1218 22:40:13.942671   14354 main.go:141] libmachine: (addons-522125)     <pae/>
	I1218 22:40:13.942675   14354 main.go:141] libmachine: (addons-522125)     
	I1218 22:40:13.942684   14354 main.go:141] libmachine: (addons-522125)   </features>
	I1218 22:40:13.942689   14354 main.go:141] libmachine: (addons-522125)   <cpu mode='host-passthrough'>
	I1218 22:40:13.942697   14354 main.go:141] libmachine: (addons-522125)   
	I1218 22:40:13.942702   14354 main.go:141] libmachine: (addons-522125)   </cpu>
	I1218 22:40:13.942729   14354 main.go:141] libmachine: (addons-522125)   <os>
	I1218 22:40:13.942760   14354 main.go:141] libmachine: (addons-522125)     <type>hvm</type>
	I1218 22:40:13.942773   14354 main.go:141] libmachine: (addons-522125)     <boot dev='cdrom'/>
	I1218 22:40:13.942785   14354 main.go:141] libmachine: (addons-522125)     <boot dev='hd'/>
	I1218 22:40:13.942799   14354 main.go:141] libmachine: (addons-522125)     <bootmenu enable='no'/>
	I1218 22:40:13.942817   14354 main.go:141] libmachine: (addons-522125)   </os>
	I1218 22:40:13.942837   14354 main.go:141] libmachine: (addons-522125)   <devices>
	I1218 22:40:13.942853   14354 main.go:141] libmachine: (addons-522125)     <disk type='file' device='cdrom'>
	I1218 22:40:13.942872   14354 main.go:141] libmachine: (addons-522125)       <source file='/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/boot2docker.iso'/>
	I1218 22:40:13.942885   14354 main.go:141] libmachine: (addons-522125)       <target dev='hdc' bus='scsi'/>
	I1218 22:40:13.942898   14354 main.go:141] libmachine: (addons-522125)       <readonly/>
	I1218 22:40:13.942909   14354 main.go:141] libmachine: (addons-522125)     </disk>
	I1218 22:40:13.942928   14354 main.go:141] libmachine: (addons-522125)     <disk type='file' device='disk'>
	I1218 22:40:13.942942   14354 main.go:141] libmachine: (addons-522125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1218 22:40:13.942959   14354 main.go:141] libmachine: (addons-522125)       <source file='/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/addons-522125.rawdisk'/>
	I1218 22:40:13.942972   14354 main.go:141] libmachine: (addons-522125)       <target dev='hda' bus='virtio'/>
	I1218 22:40:13.942985   14354 main.go:141] libmachine: (addons-522125)     </disk>
	I1218 22:40:13.943124   14354 main.go:141] libmachine: (addons-522125)     <interface type='network'>
	I1218 22:40:13.943150   14354 main.go:141] libmachine: (addons-522125)       <source network='mk-addons-522125'/>
	I1218 22:40:13.943164   14354 main.go:141] libmachine: (addons-522125)       <model type='virtio'/>
	I1218 22:40:13.943172   14354 main.go:141] libmachine: (addons-522125)     </interface>
	I1218 22:40:13.943184   14354 main.go:141] libmachine: (addons-522125)     <interface type='network'>
	I1218 22:40:13.943197   14354 main.go:141] libmachine: (addons-522125)       <source network='default'/>
	I1218 22:40:13.943219   14354 main.go:141] libmachine: (addons-522125)       <model type='virtio'/>
	I1218 22:40:13.943231   14354 main.go:141] libmachine: (addons-522125)     </interface>
	I1218 22:40:13.943245   14354 main.go:141] libmachine: (addons-522125)     <serial type='pty'>
	I1218 22:40:13.943260   14354 main.go:141] libmachine: (addons-522125)       <target port='0'/>
	I1218 22:40:13.943270   14354 main.go:141] libmachine: (addons-522125)     </serial>
	I1218 22:40:13.943279   14354 main.go:141] libmachine: (addons-522125)     <console type='pty'>
	I1218 22:40:13.943299   14354 main.go:141] libmachine: (addons-522125)       <target type='serial' port='0'/>
	I1218 22:40:13.943307   14354 main.go:141] libmachine: (addons-522125)     </console>
	I1218 22:40:13.943317   14354 main.go:141] libmachine: (addons-522125)     <rng model='virtio'>
	I1218 22:40:13.943327   14354 main.go:141] libmachine: (addons-522125)       <backend model='random'>/dev/random</backend>
	I1218 22:40:13.943340   14354 main.go:141] libmachine: (addons-522125)     </rng>
	I1218 22:40:13.943352   14354 main.go:141] libmachine: (addons-522125)     
	I1218 22:40:13.943363   14354 main.go:141] libmachine: (addons-522125)     
	I1218 22:40:13.943376   14354 main.go:141] libmachine: (addons-522125)   </devices>
	I1218 22:40:13.943386   14354 main.go:141] libmachine: (addons-522125) </domain>
	I1218 22:40:13.943398   14354 main.go:141] libmachine: (addons-522125) 
	I1218 22:40:13.948831   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9a:ba:a3 in network default
	I1218 22:40:13.949363   14354 main.go:141] libmachine: (addons-522125) Ensuring networks are active...
	I1218 22:40:13.949387   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:13.950055   14354 main.go:141] libmachine: (addons-522125) Ensuring network default is active
	I1218 22:40:13.950433   14354 main.go:141] libmachine: (addons-522125) Ensuring network mk-addons-522125 is active
	I1218 22:40:13.950899   14354 main.go:141] libmachine: (addons-522125) Getting domain xml...
	I1218 22:40:13.951493   14354 main.go:141] libmachine: (addons-522125) Creating domain...
	I1218 22:40:15.385331   14354 main.go:141] libmachine: (addons-522125) Waiting to get IP...
	I1218 22:40:15.386180   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:15.386598   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:15.386647   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:15.386582   14376 retry.go:31] will retry after 230.51413ms: waiting for machine to come up
	I1218 22:40:15.619180   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:15.619518   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:15.619545   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:15.619480   14376 retry.go:31] will retry after 289.441775ms: waiting for machine to come up
	I1218 22:40:15.910959   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:15.911444   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:15.911478   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:15.911357   14376 retry.go:31] will retry after 382.267634ms: waiting for machine to come up
	I1218 22:40:16.294992   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:16.295433   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:16.295456   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:16.295370   14376 retry.go:31] will retry after 597.38835ms: waiting for machine to come up
	I1218 22:40:16.894317   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:16.894777   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:16.894801   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:16.894726   14376 retry.go:31] will retry after 670.971652ms: waiting for machine to come up
	I1218 22:40:17.567909   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:17.568215   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:17.568234   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:17.568158   14376 retry.go:31] will retry after 683.905948ms: waiting for machine to come up
	I1218 22:40:18.253767   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:18.254151   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:18.254174   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:18.254108   14376 retry.go:31] will retry after 737.029938ms: waiting for machine to come up
	I1218 22:40:18.993115   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:18.993649   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:18.993677   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:18.993582   14376 retry.go:31] will retry after 1.23291322s: waiting for machine to come up
	I1218 22:40:20.228020   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:20.228416   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:20.228456   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:20.228342   14376 retry.go:31] will retry after 1.455640782s: waiting for machine to come up
	I1218 22:40:21.685770   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:21.686220   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:21.686248   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:21.686160   14376 retry.go:31] will retry after 2.202557094s: waiting for machine to come up
	I1218 22:40:23.890014   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:23.890469   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:23.890523   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:23.890406   14376 retry.go:31] will retry after 2.440138488s: waiting for machine to come up
	I1218 22:40:26.333958   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:26.334450   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:26.334477   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:26.334386   14376 retry.go:31] will retry after 2.840186568s: waiting for machine to come up
	I1218 22:40:29.176838   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:29.177155   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:29.177189   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:29.177136   14376 retry.go:31] will retry after 2.991125931s: waiting for machine to come up
	I1218 22:40:32.169353   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:32.169732   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find current IP address of domain addons-522125 in network mk-addons-522125
	I1218 22:40:32.169758   14354 main.go:141] libmachine: (addons-522125) DBG | I1218 22:40:32.169681   14376 retry.go:31] will retry after 5.157603659s: waiting for machine to come up
	I1218 22:40:37.331837   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.332309   14354 main.go:141] libmachine: (addons-522125) Found IP for machine: 192.168.39.206
	I1218 22:40:37.332341   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has current primary IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.332348   14354 main.go:141] libmachine: (addons-522125) Reserving static IP address...
	I1218 22:40:37.332692   14354 main.go:141] libmachine: (addons-522125) DBG | unable to find host DHCP lease matching {name: "addons-522125", mac: "52:54:00:9f:5c:80", ip: "192.168.39.206"} in network mk-addons-522125
	I1218 22:40:37.401156   14354 main.go:141] libmachine: (addons-522125) Reserved static IP address: 192.168.39.206
	I1218 22:40:37.401181   14354 main.go:141] libmachine: (addons-522125) DBG | Getting to WaitForSSH function...
	I1218 22:40:37.401192   14354 main.go:141] libmachine: (addons-522125) Waiting for SSH to be available...
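The retry lines above show libmachine polling libvirt with increasing backoff until the guest's DHCP lease appears on the mk-addons-522125 network, at which point 192.168.39.206 is found and reserved. Roughly the same check can be reproduced by hand with virsh; the loop below is only a sketch of the idea using the MAC and network name from this log, not minikube's own code.

    # Poll the libvirt network for a lease matching the VM's MAC address (values from the log above).
    # Depending on the libvirt setup this may need sudo and/or "-c qemu:///system".
    MAC=52:54:00:9f:5c:80
    NET=mk-addons-522125
    until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
        echo "waiting for a DHCP lease on $NET ..."
        sleep 2
    done
    virsh net-dhcp-leases "$NET" | grep -i "$MAC"    # prints the leased address, 192.168.39.206/24 in this run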
	I1218 22:40:37.403685   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.404115   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:37.404140   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.404374   14354 main.go:141] libmachine: (addons-522125) DBG | Using SSH client type: external
	I1218 22:40:37.404404   14354 main.go:141] libmachine: (addons-522125) DBG | Using SSH private key: /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa (-rw-------)
	I1218 22:40:37.404441   14354 main.go:141] libmachine: (addons-522125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 22:40:37.404456   14354 main.go:141] libmachine: (addons-522125) DBG | About to run SSH command:
	I1218 22:40:37.404493   14354 main.go:141] libmachine: (addons-522125) DBG | exit 0
	I1218 22:40:37.504200   14354 main.go:141] libmachine: (addons-522125) DBG | SSH cmd err, output: <nil>: 
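The "Using SSH client type: external" block above records the exact argument vector handed to /usr/bin/ssh for the exit 0 reachability probe. Reassembled as a single command (key path and address are the ones from this run), it is approximately:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa \
        -p 22 docker@192.168.39.206 'exit 0' && echo "SSH is reachable"

A zero exit status is all the probe checks for; the empty "SSH cmd err, output" line above is the success case.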
	I1218 22:40:37.504469   14354 main.go:141] libmachine: (addons-522125) KVM machine creation complete!
	I1218 22:40:37.504788   14354 main.go:141] libmachine: (addons-522125) Calling .GetConfigRaw
	I1218 22:40:37.505287   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:37.505473   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:37.505629   14354 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1218 22:40:37.505646   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:40:37.506789   14354 main.go:141] libmachine: Detecting operating system of created instance...
	I1218 22:40:37.506814   14354 main.go:141] libmachine: Waiting for SSH to be available...
	I1218 22:40:37.506823   14354 main.go:141] libmachine: Getting to WaitForSSH function...
	I1218 22:40:37.506834   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:37.508912   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.509237   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:37.509266   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.509448   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:37.509597   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.509714   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.509887   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:37.510032   14354 main.go:141] libmachine: Using SSH client type: native
	I1218 22:40:37.510512   14354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1218 22:40:37.510529   14354 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1218 22:40:37.635289   14354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 22:40:37.635338   14354 main.go:141] libmachine: Detecting the provisioner...
	I1218 22:40:37.635350   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:37.637989   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.638374   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:37.638397   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.638551   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:37.638733   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.638892   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.639020   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:37.639149   14354 main.go:141] libmachine: Using SSH client type: native
	I1218 22:40:37.639511   14354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1218 22:40:37.639525   14354 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1218 22:40:37.765159   14354 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1218 22:40:37.765272   14354 main.go:141] libmachine: found compatible host: buildroot
	I1218 22:40:37.765292   14354 main.go:141] libmachine: Provisioning with buildroot...
	I1218 22:40:37.765304   14354 main.go:141] libmachine: (addons-522125) Calling .GetMachineName
	I1218 22:40:37.765574   14354 buildroot.go:166] provisioning hostname "addons-522125"
	I1218 22:40:37.765600   14354 main.go:141] libmachine: (addons-522125) Calling .GetMachineName
	I1218 22:40:37.765802   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:37.768049   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.768386   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:37.768407   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.768545   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:37.768757   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.768933   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.769128   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:37.769322   14354 main.go:141] libmachine: Using SSH client type: native
	I1218 22:40:37.769624   14354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1218 22:40:37.769637   14354 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-522125 && echo "addons-522125" | sudo tee /etc/hostname
	I1218 22:40:37.905183   14354 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522125
	
	I1218 22:40:37.905230   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:37.907639   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.907978   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:37.908018   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:37.908145   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:37.908355   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.908554   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:37.908730   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:37.908966   14354 main.go:141] libmachine: Using SSH client type: native
	I1218 22:40:37.909352   14354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1218 22:40:37.909371   14354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-522125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-522125/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-522125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 22:40:38.041706   14354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 22:40:38.041734   14354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17822-6323/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-6323/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-6323/.minikube}
	I1218 22:40:38.041756   14354 buildroot.go:174] setting up certificates
	I1218 22:40:38.041793   14354 provision.go:83] configureAuth start
	I1218 22:40:38.041811   14354 main.go:141] libmachine: (addons-522125) Calling .GetMachineName
	I1218 22:40:38.042096   14354 main.go:141] libmachine: (addons-522125) Calling .GetIP
	I1218 22:40:38.044665   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.044934   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.044956   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.045096   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.047016   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.047313   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.047340   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.047438   14354 provision.go:138] copyHostCerts
	I1218 22:40:38.047515   14354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-6323/.minikube/cert.pem (1123 bytes)
	I1218 22:40:38.047682   14354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-6323/.minikube/key.pem (1679 bytes)
	I1218 22:40:38.047782   14354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-6323/.minikube/ca.pem (1082 bytes)
	I1218 22:40:38.047861   14354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-6323/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca-key.pem org=jenkins.addons-522125 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube addons-522125]
	I1218 22:40:38.237934   14354 provision.go:172] copyRemoteCerts
	I1218 22:40:38.237992   14354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 22:40:38.238020   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.240647   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.240974   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.241001   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.241164   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:38.241356   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.241531   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:38.241652   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:40:38.333599   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 22:40:38.355726   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 22:40:38.378980   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
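copyRemoteCerts pushes the CA, the freshly generated server certificate and its key from the host's .minikube tree into /etc/docker on the guest. The commands below are a hand-run sketch of that step; scp plus a sudo copy stand in for minikube's internal file transfer, which goes over the same SSH session.

    KEY=/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa
    MK=/home/jenkins/minikube-integration/17822-6323/.minikube
    scp -i "$KEY" "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" \
        docker@192.168.39.206:/tmp/
    ssh -i "$KEY" docker@192.168.39.206 \
        'sudo mkdir -p /etc/docker && sudo cp /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'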
	I1218 22:40:38.400838   14354 provision.go:86] duration metric: configureAuth took 359.031584ms
	I1218 22:40:38.400866   14354 buildroot.go:189] setting minikube options for container-runtime
	I1218 22:40:38.401052   14354 config.go:182] Loaded profile config "addons-522125": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:40:38.401076   14354 main.go:141] libmachine: Checking connection to Docker...
	I1218 22:40:38.401090   14354 main.go:141] libmachine: (addons-522125) Calling .GetURL
	I1218 22:40:38.402283   14354 main.go:141] libmachine: (addons-522125) DBG | Using libvirt version 6000000
	I1218 22:40:38.404716   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.405013   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.405038   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.405193   14354 main.go:141] libmachine: Docker is up and running!
	I1218 22:40:38.405208   14354 main.go:141] libmachine: Reticulating splines...
	I1218 22:40:38.405215   14354 client.go:171] LocalClient.Create took 25.557774185s
	I1218 22:40:38.405240   14354 start.go:167] duration metric: libmachine.API.Create for "addons-522125" took 25.557845248s
	I1218 22:40:38.405252   14354 start.go:300] post-start starting for "addons-522125" (driver="kvm2")
	I1218 22:40:38.405267   14354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 22:40:38.405289   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:38.405539   14354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 22:40:38.405567   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.407624   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.408009   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.408042   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.408186   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:38.408433   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.408592   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:38.408736   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:40:38.501321   14354 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 22:40:38.505596   14354 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 22:40:38.505630   14354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-6323/.minikube/addons for local assets ...
	I1218 22:40:38.505693   14354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-6323/.minikube/files for local assets ...
	I1218 22:40:38.505723   14354 start.go:303] post-start completed in 100.460526ms
	I1218 22:40:38.505760   14354 main.go:141] libmachine: (addons-522125) Calling .GetConfigRaw
	I1218 22:40:38.506373   14354 main.go:141] libmachine: (addons-522125) Calling .GetIP
	I1218 22:40:38.508839   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.509099   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.509128   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.509361   14354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/config.json ...
	I1218 22:40:38.509507   14354 start.go:128] duration metric: createHost completed in 25.679321743s
	I1218 22:40:38.509528   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.511589   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.511857   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.511891   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.512028   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:38.512201   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.512341   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.512510   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:38.512671   14354 main.go:141] libmachine: Using SSH client type: native
	I1218 22:40:38.513018   14354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1218 22:40:38.513031   14354 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1218 22:40:38.636874   14354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702939238.619091235
	
	I1218 22:40:38.636898   14354 fix.go:206] guest clock: 1702939238.619091235
	I1218 22:40:38.636908   14354 fix.go:219] Guest: 2023-12-18 22:40:38.619091235 +0000 UTC Remote: 2023-12-18 22:40:38.509518315 +0000 UTC m=+25.793442345 (delta=109.57292ms)
	I1218 22:40:38.636931   14354 fix.go:190] guest clock delta is within tolerance: 109.57292ms
	I1218 22:40:38.636937   14354 start.go:83] releasing machines lock for "addons-522125", held for 25.806843268s
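The garbled date +%!s(MISSING).%!N(MISSING) a few lines up is the logger mangling the literal command date +%s.%N (the percent verbs have no arguments, so Go's fmt prints them as MISSING); the reply 1702939238.619091235 is the guest clock in seconds.nanoseconds, which fix.go compares to the host clock and accepts because the ~110 ms delta is within tolerance. The same comparison can be made by hand:

    KEY=/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa
    HOST_TS=$(date +%s.%N)
    GUEST_TS=$(ssh -i "$KEY" docker@192.168.39.206 'date +%s.%N')
    # A positive delta means the guest clock runs ahead of the host clock.
    awk -v h="$HOST_TS" -v g="$GUEST_TS" 'BEGIN { printf "guest - host = %.6f s\n", g - h }'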
	I1218 22:40:38.636960   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:38.637225   14354 main.go:141] libmachine: (addons-522125) Calling .GetIP
	I1218 22:40:38.639624   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.639943   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.639974   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.640129   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:38.640592   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:38.640757   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:40:38.640864   14354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 22:40:38.640925   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.640937   14354 ssh_runner.go:195] Run: cat /version.json
	I1218 22:40:38.640959   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:40:38.643483   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.643592   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.643843   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.643889   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.643969   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:38.643996   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:38.644003   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:38.644193   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:40:38.644232   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.644373   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:38.644381   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:40:38.644509   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:40:38.644526   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:40:38.644627   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:40:38.869344   14354 ssh_runner.go:195] Run: systemctl --version
	I1218 22:40:38.874882   14354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 22:40:38.880083   14354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 22:40:38.880147   14354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 22:40:38.894100   14354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
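The find invocation above is logged with its printf verb eaten by the logger (%!p(MISSING) was a literal %p). What runs on the guest renames every bridge/podman CNI config so the runtime starts with a clean /etc/cni/net.d, which is why the next line reports 87-podman-bridge.conflist as disabled. Restored and quoted for a shell, the command is approximately:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;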
	I1218 22:40:38.894117   14354 start.go:475] detecting cgroup driver to use...
	I1218 22:40:38.894170   14354 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 22:40:38.928830   14354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 22:40:38.941312   14354 docker.go:203] disabling cri-docker service (if available) ...
	I1218 22:40:38.941383   14354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 22:40:38.954102   14354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 22:40:38.966773   14354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 22:40:39.071767   14354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 22:40:39.192571   14354 docker.go:219] disabling docker service ...
	I1218 22:40:39.192641   14354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 22:40:39.206489   14354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 22:40:39.218534   14354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 22:40:39.331748   14354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 22:40:39.439058   14354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 22:40:39.450941   14354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 22:40:39.467389   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 22:40:39.476867   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 22:40:39.486117   14354 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 22:40:39.486179   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 22:40:39.495612   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:40:39.505144   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 22:40:39.515183   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:40:39.524917   14354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 22:40:39.534813   14354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
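The run of sed edits above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj and SystemdCgroup are forced to false (matching the "cgroupfs" driver chosen above), v1 runc runtime names are replaced with io.containerd.runc.v2, the stray systemd_cgroup key and /etc/cni/net.mk are removed, and conf_dir is pointed at /etc/cni/net.d. One way to spot-check the result on the guest:

    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|io\.containerd\.runc|conf_dir' \
        /etc/containerd/config.toml
    # Expected, given the sed commands above:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   "io.containerd.runc.v2" wherever a v1 runtime name appeared
    #   conf_dir = "/etc/cni/net.d"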
	I1218 22:40:39.544204   14354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 22:40:39.552716   14354 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1218 22:40:39.552764   14354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1218 22:40:39.564879   14354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
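The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the module is loaded next and IPv4 forwarding is switched on. Both can be verified afterwards with:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # resolves once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above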
	I1218 22:40:39.573434   14354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:40:39.674012   14354 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 22:40:39.703578   14354 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 22:40:39.703683   14354 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 22:40:39.708344   14354 retry.go:31] will retry after 1.457723697s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1218 22:40:41.167001   14354 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
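After the containerd restart the socket takes a moment to appear, hence the single retried stat above within the 60 s budget. The same wait, expressed without minikube's retry helper, is simply:

    for i in $(seq 1 60); do
        [ -S /run/containerd/containerd.sock ] && break
        sleep 1
    done
    stat /run/containerd/containerd.sock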
	I1218 22:40:41.172177   14354 start.go:543] Will wait 60s for crictl version
	I1218 22:40:41.172233   14354 ssh_runner.go:195] Run: which crictl
	I1218 22:40:41.175825   14354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 22:40:41.217420   14354 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I1218 22:40:41.217495   14354 ssh_runner.go:195] Run: containerd --version
	I1218 22:40:41.249241   14354 ssh_runner.go:195] Run: containerd --version
	I1218 22:40:41.279675   14354 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I1218 22:40:41.281226   14354 main.go:141] libmachine: (addons-522125) Calling .GetIP
	I1218 22:40:41.283825   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:41.284173   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:40:41.284203   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:40:41.284460   14354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1218 22:40:41.288116   14354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 22:40:41.299366   14354 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 22:40:41.299410   14354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 22:40:41.335405   14354 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1218 22:40:41.335459   14354 ssh_runner.go:195] Run: which lz4
	I1218 22:40:41.339153   14354 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1218 22:40:41.343123   14354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 22:40:41.343151   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I1218 22:40:43.077550   14354 containerd.go:547] Took 1.738441 seconds to copy over tarball
	I1218 22:40:43.077618   14354 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 22:40:46.027518   14354 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.949861298s)
	I1218 22:40:46.027545   14354 containerd.go:554] Took 2.949976 seconds to extract the tarball
	I1218 22:40:46.027563   14354 ssh_runner.go:146] rm: /preloaded.tar.lz4
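Since no preloaded images were found in the image store, the ~457 MB preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 is copied over and unpacked into /var, which populates containerd's content store directly. A hand-run equivalent of the copy-and-extract step (minikube itself writes to /preloaded.tar.lz4 at the filesystem root; this sketch stages through /tmp so an unprivileged scp works):

    KEY=/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa
    TARBALL=/home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
    scp -i "$KEY" "$TARBALL" docker@192.168.39.206:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.206 \
        'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4 && sudo systemctl restart containerd'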
	I1218 22:40:46.068632   14354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:40:46.167355   14354 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 22:40:46.194560   14354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 22:40:46.239869   14354 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.4 registry.k8s.io/kube-controller-manager:v1.28.4 registry.k8s.io/kube-scheduler:v1.28.4 registry.k8s.io/kube-proxy:v1.28.4 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 22:40:46.239959   14354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:40:46.239980   14354 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 22:40:46.240000   14354 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.4
	I1218 22:40:46.240022   14354 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1218 22:40:46.240234   14354 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.4
	I1218 22:40:46.239969   14354 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.4
	I1218 22:40:46.240301   14354 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1218 22:40:46.240858   14354 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1218 22:40:46.241429   14354 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.4
	I1218 22:40:46.241432   14354 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.4
	I1218 22:40:46.241445   14354 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.4
	I1218 22:40:46.241435   14354 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 22:40:46.242049   14354 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1218 22:40:46.242078   14354 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1218 22:40:46.242217   14354 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1218 22:40:46.242682   14354 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:40:46.498406   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.28.4" and sha "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"
	I1218 22:40:46.498467   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.585690   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/etcd:3.5.9-0" and sha "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"
	I1218 22:40:46.585744   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.588995   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/pause:3.9" and sha "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"
	I1218 22:40:46.589043   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.595132   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.10.1" and sha "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"
	I1218 22:40:46.595196   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.598435   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.28.4" and sha "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"
	I1218 22:40:46.598493   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.630185   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.28.4" and sha "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"
	I1218 22:40:46.630236   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:46.648008   14354 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.28.4" and sha "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"
	I1218 22:40:46.648060   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:47.793312   14354 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.204245468s)
	I1218 22:40:47.793376   14354 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.19816676s)
	I1218 22:40:47.793713   14354 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.195207605s)
	I1218 22:40:47.845269   14354 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.215008132s)
	I1218 22:40:47.865326   14354 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.217243641s)
	I1218 22:40:48.159441   14354 containerd.go:251] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1218 22:40:48.159504   14354 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I1218 22:40:48.383408   14354 cache_images.go:123] Successfully loaded all cached images
	I1218 22:40:48.383427   14354 cache_images.go:92] LoadImages completed in 2.14353372s
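Because the preload already populated the k8s.io namespace, LoadImages appears to only verify here: each ctr -n=k8s.io images check call above confirms one of the eight expected images rather than importing anything. The same set can be listed on the guest with the commands this log already uses:

    sudo crictl images
    # or, against containerd directly:
    sudo ctr -n=k8s.io images check | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'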
	I1218 22:40:48.383489   14354 ssh_runner.go:195] Run: sudo crictl info
	I1218 22:40:48.418139   14354 cni.go:84] Creating CNI manager for ""
	I1218 22:40:48.418166   14354 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:40:48.418189   14354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 22:40:48.418213   14354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-522125 NodeName:addons-522125 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 22:40:48.418367   14354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-522125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 22:40:48.418450   14354 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-522125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-522125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
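The unit fragment above is not a full service file; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as a drop-in next to /lib/systemd/system/kubelet.service, with the empty ExecStart= line clearing any default command before the minikube-specific one is set. Once both files are in place the merged unit can be reviewed on the guest with:

    systemctl cat kubelet                     # service file plus the 10-kubeadm.conf drop-in
    systemd-analyze verify kubelet.service    # optional syntax check of the merged unit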
	I1218 22:40:48.418512   14354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 22:40:48.427937   14354 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 22:40:48.428002   14354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 22:40:48.436190   14354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1218 22:40:48.452195   14354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 22:40:48.467379   14354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1218 22:40:48.483110   14354 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1218 22:40:48.486971   14354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
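At this point everything kubeadm needs is staged on the guest but nothing has been started: the kubelet drop-in and service file, the rendered config at /var/tmp/minikube/kubeadm.yaml.new, and a control-plane.minikube.internal entry in /etc/hosts. A quick way to eyeball the staging (paths as in the scp and grep lines above):

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo head -n 20 /var/tmp/minikube/kubeadm.yaml.new
    grep control-plane.minikube.internal /etc/hosts    # expect: 192.168.39.206  control-plane.minikube.internal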
	I1218 22:40:48.499016   14354 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125 for IP: 192.168.39.206
	I1218 22:40:48.499048   14354 certs.go:190] acquiring lock for shared ca certs: {Name:mk8b114a9af54f75059c7fcfd09cbb6860d26d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:48.499184   14354 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.key
	I1218 22:40:48.651418   14354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt ...
	I1218 22:40:48.651449   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt: {Name:mkbb9ba09b7e1b1c5c5f940aafa172171653e772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:48.651626   14354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-6323/.minikube/ca.key ...
	I1218 22:40:48.651641   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/ca.key: {Name:mk7025322ab148d0957f86a1aed617b520da314d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:48.651735   14354 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.key
	I1218 22:40:48.737609   14354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.crt ...
	I1218 22:40:48.737639   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.crt: {Name:mk2f65b6fb4a9b53d1d776f0ea5f9aaa8aed99ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:48.737808   14354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.key ...
	I1218 22:40:48.737823   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.key: {Name:mk255d626e87f6556aa8c2a99461e693bd09a2b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:48.737946   14354 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.key
	I1218 22:40:48.737962   14354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt with IP's: []
	I1218 22:40:49.332686   14354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt ...
	I1218 22:40:49.332739   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: {Name:mk974065c8d5d797b6d8b99b3ce9f9f34d4e0d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.332899   14354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.key ...
	I1218 22:40:49.332912   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.key: {Name:mk4dedfd7c6a0b3249500a8c7dc8cc50eeca2c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.333005   14354 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key.b548e89c
	I1218 22:40:49.333031   14354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 22:40:49.597447   14354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt.b548e89c ...
	I1218 22:40:49.597477   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt.b548e89c: {Name:mk14c82857e74ae54b2262fc69dc64de195d59e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.597641   14354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key.b548e89c ...
	I1218 22:40:49.597659   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key.b548e89c: {Name:mk246a29ec581c1c770180e3add299f66e5068e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.597748   14354 certs.go:337] copying /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt
	I1218 22:40:49.597935   14354 certs.go:341] copying /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key
	I1218 22:40:49.598047   14354 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.key
	I1218 22:40:49.598074   14354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.crt with IP's: []
	I1218 22:40:49.744550   14354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.crt ...
	I1218 22:40:49.744577   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.crt: {Name:mk5dfd8b2c4dddd99883bca900f685cadcbffac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.744745   14354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.key ...
	I1218 22:40:49.744760   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.key: {Name:mk6dce95bfb4f3a557d727d3a36c03499fe72226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:40:49.744959   14354 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca-key.pem (1679 bytes)
	I1218 22:40:49.745010   14354 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/home/jenkins/minikube-integration/17822-6323/.minikube/certs/ca.pem (1082 bytes)
	I1218 22:40:49.745044   14354 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/home/jenkins/minikube-integration/17822-6323/.minikube/certs/cert.pem (1123 bytes)
	I1218 22:40:49.745081   14354 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-6323/.minikube/certs/home/jenkins/minikube-integration/17822-6323/.minikube/certs/key.pem (1679 bytes)
	I1218 22:40:49.745665   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 22:40:49.772549   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 22:40:49.795600   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 22:40:49.818912   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 22:40:49.840927   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 22:40:49.862826   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 22:40:49.885564   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 22:40:49.907487   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 22:40:49.929243   14354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 22:40:49.951208   14354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
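The apiserver certificate generated above was requested with the SANs listed on the "Generating cert ... with IP's" line (192.168.39.206, 10.96.0.1, 127.0.0.1, 10.0.0.1) and has just been copied to /var/lib/minikube/certs/apiserver.crt on the guest, so it can be inspected there with openssl:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'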
	I1218 22:40:49.966332   14354 ssh_runner.go:195] Run: openssl version
	I1218 22:40:49.972095   14354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 22:40:49.981693   14354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:40:49.986227   14354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:40:49.986276   14354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:40:49.991589   14354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
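
	The two openssl/ln steps above publish the minikube CA into the guest's system trust store: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 here), and a "<hash>.0" symlink under /etc/ssl/certs makes the CA discoverable by hash lookup. A minimal local sketch of the same idea, assuming the paths shown in the log and an openssl binary on PATH (the real run issues these commands over SSH inside the guest, and writing under /etc/ssl/certs needs root):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        caPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        // ln -fs equivalent: drop any stale link, then create a fresh one.
	        _ = os.Remove(link)
	        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
	            panic(err)
	        }
	        fmt.Println("trust symlink:", link)
	    }
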
	I1218 22:40:50.000701   14354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 22:40:50.004714   14354 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 22:40:50.004763   14354 kubeadm.go:404] StartCluster: {Name:addons-522125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-522125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
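
	The StartCluster entry above is a single %+v dump of the cluster configuration, which is why it reads as one long run of Field:value pairs. A small sketch of how such a dump is produced, using an illustrative struct that carries only a handful of the fields visible in the log (this is not minikube's real config.ClusterConfig type):

	    package main

	    import "fmt"

	    type kubernetesConfig struct {
	        KubernetesVersion string
	        ClusterName       string
	        ContainerRuntime  string
	        ServiceCIDR       string
	        NodePort          int
	    }

	    type clusterConfig struct {
	        Name             string
	        Driver           string
	        Memory           int // MB
	        CPUs             int
	        DiskSize         int // MB
	        KubernetesConfig kubernetesConfig
	    }

	    func main() {
	        cc := clusterConfig{
	            Name:   "addons-522125",
	            Driver: "kvm2",
	            Memory: 4000, CPUs: 2, DiskSize: 20000,
	            KubernetesConfig: kubernetesConfig{
	                KubernetesVersion: "v1.28.4",
	                ClusterName:       "addons-522125",
	                ContainerRuntime:  "containerd",
	                ServiceCIDR:       "10.96.0.0/12",
	                NodePort:          8443,
	            },
	        }
	        // %+v produces the flat "Field:value Field:value ..." form seen in the log.
	        fmt.Printf("StartCluster: %+v\n", cc)
	    }
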
	I1218 22:40:50.004859   14354 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 22:40:50.004913   14354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 22:40:50.041994   14354 cri.go:89] found id: ""
	I1218 22:40:50.042069   14354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 22:40:50.050665   14354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 22:40:50.058859   14354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 22:40:50.067065   14354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 22:40:50.067112   14354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1218 22:40:50.255097   14354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 22:41:02.165585   14354 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 22:41:02.165650   14354 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 22:41:02.165747   14354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 22:41:02.165867   14354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 22:41:02.165985   14354 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 22:41:02.166070   14354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 22:41:02.167753   14354 out.go:204]   - Generating certificates and keys ...
	I1218 22:41:02.167850   14354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 22:41:02.167944   14354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 22:41:02.168071   14354 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 22:41:02.168138   14354 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 22:41:02.168206   14354 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 22:41:02.168358   14354 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 22:41:02.168443   14354 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 22:41:02.168583   14354 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-522125 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1218 22:41:02.168651   14354 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 22:41:02.168809   14354 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-522125 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1218 22:41:02.168903   14354 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 22:41:02.168996   14354 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 22:41:02.169057   14354 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 22:41:02.169136   14354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 22:41:02.169211   14354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 22:41:02.169309   14354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 22:41:02.169382   14354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 22:41:02.169454   14354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 22:41:02.169538   14354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 22:41:02.169592   14354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 22:41:02.171999   14354 out.go:204]   - Booting up control plane ...
	I1218 22:41:02.172106   14354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 22:41:02.172170   14354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 22:41:02.172225   14354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 22:41:02.172359   14354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 22:41:02.172506   14354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 22:41:02.172548   14354 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 22:41:02.172747   14354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 22:41:02.172828   14354 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.507256 seconds
	I1218 22:41:02.172960   14354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 22:41:02.173135   14354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 22:41:02.173211   14354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 22:41:02.173401   14354 kubeadm.go:322] [mark-control-plane] Marking the node addons-522125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 22:41:02.173472   14354 kubeadm.go:322] [bootstrap-token] Using token: 5gi8jf.rbl3m95sbs2mkj32
	I1218 22:41:02.174845   14354 out.go:204]   - Configuring RBAC rules ...
	I1218 22:41:02.174969   14354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 22:41:02.175087   14354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 22:41:02.175237   14354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 22:41:02.175404   14354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 22:41:02.175573   14354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 22:41:02.175682   14354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 22:41:02.175851   14354 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 22:41:02.175914   14354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 22:41:02.175993   14354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 22:41:02.176007   14354 kubeadm.go:322] 
	I1218 22:41:02.176105   14354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 22:41:02.176116   14354 kubeadm.go:322] 
	I1218 22:41:02.176209   14354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 22:41:02.176223   14354 kubeadm.go:322] 
	I1218 22:41:02.176264   14354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 22:41:02.176362   14354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 22:41:02.176464   14354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 22:41:02.176482   14354 kubeadm.go:322] 
	I1218 22:41:02.176557   14354 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 22:41:02.176569   14354 kubeadm.go:322] 
	I1218 22:41:02.176647   14354 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 22:41:02.176656   14354 kubeadm.go:322] 
	I1218 22:41:02.176731   14354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 22:41:02.176827   14354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 22:41:02.176915   14354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 22:41:02.176928   14354 kubeadm.go:322] 
	I1218 22:41:02.177052   14354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 22:41:02.177156   14354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 22:41:02.177171   14354 kubeadm.go:322] 
	I1218 22:41:02.177283   14354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5gi8jf.rbl3m95sbs2mkj32 \
	I1218 22:41:02.177406   14354 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:42f2154bbeb4d43b951d7d07c43d54daad944ffae0bbf41596b175d20557d34b \
	I1218 22:41:02.177442   14354 kubeadm.go:322] 	--control-plane 
	I1218 22:41:02.177451   14354 kubeadm.go:322] 
	I1218 22:41:02.177571   14354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 22:41:02.177592   14354 kubeadm.go:322] 
	I1218 22:41:02.177699   14354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5gi8jf.rbl3m95sbs2mkj32 \
	I1218 22:41:02.177846   14354 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:42f2154bbeb4d43b951d7d07c43d54daad944ffae0bbf41596b175d20557d34b 
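
	The --discovery-token-ca-cert-hash printed by kubeadm is, per kubeadm's token-based discovery docs, the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A sketch of recomputing it from the CA file copied earlier in this log (path taken from the log; run on the guest, or anywhere the file is readable):

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	    }
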
	I1218 22:41:02.177865   14354 cni.go:84] Creating CNI manager for ""
	I1218 22:41:02.177875   14354 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:41:02.179516   14354 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1218 22:41:02.180800   14354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1218 22:41:02.196337   14354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
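
	The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is a bridge CNI plugin chain. The exact contents are not shown in the log; the sketch below is a typical bridge + portmap conflist of the same shape, checked as valid JSON. The bridge name and pod subnet are assumptions, not the file minikube actually generated:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	    )

	    // Illustrative bridge CNI configuration; values are assumptions.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        var parsed map[string]any
	        if err := json.Unmarshal([]byte(conflist), &parsed); err != nil {
	            panic(err)
	        }
	        fmt.Println("plugins in chain:", len(parsed["plugins"].([]any)))
	    }
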
	I1218 22:41:02.225546   14354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 22:41:02.225628   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:02.225658   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=addons-522125 minikube.k8s.io/updated_at=2023_12_18T22_41_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:02.448338   14354 ops.go:34] apiserver oom_adj: -16
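
	Reading /proc/$(pgrep kube-apiserver)/oom_adj confirms the API server received a protective OOM adjustment (-16 above). A local sketch of the same check, assuming pgrep is available; it picks the newest matching process, whereas the log's shell one-liner assumes a single kube-apiserver:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	        if err != nil {
	            panic(err) // no such process, or pgrep missing
	        }
	        pid := strings.TrimSpace(string(out))
	        // oom_adj is the legacy knob the log reads; oom_score_adj is its modern counterpart.
	        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("apiserver oom_adj: %s", data)
	    }
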
	I1218 22:41:02.448351   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:02.948879   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:03.448475   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:03.948967   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:04.449418   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:04.949266   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:05.448916   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:05.949433   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:06.448774   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:06.948411   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:07.448759   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:07.949205   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:08.448577   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:08.948736   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:09.448563   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:09.948891   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:10.449304   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:10.948528   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:11.448412   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:11.948437   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:12.449306   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:12.949379   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:13.448413   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:13.948569   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:14.448982   14354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:41:14.565872   14354 kubeadm.go:1088] duration metric: took 12.340313475s to wait for elevateKubeSystemPrivileges.
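
	The burst of repeated "kubectl get sa default" runs above is a plain poll: retry roughly every 500ms until the default service account exists (about 12.3s here), so that objects created right after init do not race its creation. A sketch of that wait pattern, assuming kubectl on PATH; the interval and timeout values are illustrative:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA polls until `kubectl get sa default` succeeds or the timeout expires.
	    func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("default service account not ready after %s", timeout)
	            }
	            time.Sleep(interval)
	        }
	    }

	    func main() {
	        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, time.Minute); err != nil {
	            panic(err)
	        }
	        fmt.Println("default service account is ready")
	    }
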
	I1218 22:41:14.565967   14354 kubeadm.go:406] StartCluster complete in 24.561207429s
	I1218 22:41:14.565988   14354 settings.go:142] acquiring lock: {Name:mkeaf153027347367f2c8b52f117de5ab735d131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:41:14.566097   14354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:41:14.567078   14354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/kubeconfig: {Name:mk73362fb619b048deddcae6f30fd918c2e03241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:41:14.567552   14354 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
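
	The toEnable map above is simply addon name mapped to desired state; the enable pass that follows only acts on the true entries (cloud-spanner, csi-hostpath-driver, ingress, metrics-server, and so on). A tiny sketch of extracting the enabled set in a stable order, using a small illustrative subset of that map:

	    package main

	    import (
	        "fmt"
	        "sort"
	    )

	    func main() {
	        // Illustrative subset of the toEnable map from the log.
	        toEnable := map[string]bool{
	            "volumesnapshots": true, "metrics-server": true, "ingress": true,
	            "headlamp": false, "dashboard": false, "gcp-auth": true,
	        }
	        var enabled []string
	        for name, on := range toEnable {
	            if on {
	                enabled = append(enabled, name)
	            }
	        }
	        sort.Strings(enabled)
	        fmt.Println("addons to enable:", enabled)
	    }
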
	I1218 22:41:14.567726   14354 addons.go:69] Setting volumesnapshots=true in profile "addons-522125"
	I1218 22:41:14.567748   14354 addons.go:231] Setting addon volumesnapshots=true in "addons-522125"
	I1218 22:41:14.567799   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.567952   14354 addons.go:69] Setting metrics-server=true in profile "addons-522125"
	I1218 22:41:14.567956   14354 addons.go:69] Setting ingress-dns=true in profile "addons-522125"
	I1218 22:41:14.567970   14354 addons.go:231] Setting addon metrics-server=true in "addons-522125"
	I1218 22:41:14.567973   14354 addons.go:69] Setting helm-tiller=true in profile "addons-522125"
	I1218 22:41:14.567992   14354 addons.go:69] Setting ingress=true in profile "addons-522125"
	I1218 22:41:14.568000   14354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 22:41:14.568038   14354 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-522125"
	I1218 22:41:14.568047   14354 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-522125"
	I1218 22:41:14.568055   14354 addons.go:69] Setting inspektor-gadget=true in profile "addons-522125"
	I1218 22:41:14.568060   14354 addons.go:69] Setting storage-provisioner=true in profile "addons-522125"
	I1218 22:41:14.568072   14354 addons.go:231] Setting addon inspektor-gadget=true in "addons-522125"
	I1218 22:41:14.568074   14354 addons.go:231] Setting addon storage-provisioner=true in "addons-522125"
	I1218 22:41:14.568094   14354 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-522125"
	I1218 22:41:14.568097   14354 addons.go:69] Setting cloud-spanner=true in profile "addons-522125"
	I1218 22:41:14.568133   14354 addons.go:231] Setting addon cloud-spanner=true in "addons-522125"
	I1218 22:41:14.568140   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568177   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568006   14354 addons.go:231] Setting addon helm-tiller=true in "addons-522125"
	I1218 22:41:14.568252   14354 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-522125"
	I1218 22:41:14.568267   14354 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-522125"
	I1218 22:41:14.568276   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568053   14354 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-522125"
	I1218 22:41:14.568499   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568545   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.568586   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.568117   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568666   14354 config.go:182] Loaded profile config "addons-522125": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:41:14.568971   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.568981   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.568996   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569003   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.569004   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569007   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.569028   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569031   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569064   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.568858   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.569095   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569098   14354 addons.go:69] Setting gcp-auth=true in profile "addons-522125"
	I1218 22:41:14.569105   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.568028   14354 addons.go:69] Setting registry=true in profile "addons-522125"
	I1218 22:41:14.569119   14354 mustload.go:65] Loading cluster: addons-522125
	I1218 22:41:14.569131   14354 addons.go:69] Setting default-storageclass=true in profile "addons-522125"
	I1218 22:41:14.569136   14354 addons.go:231] Setting addon registry=true in "addons-522125"
	I1218 22:41:14.568005   14354 addons.go:231] Setting addon ingress=true in "addons-522125"
	I1218 22:41:14.569142   14354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-522125"
	I1218 22:41:14.568117   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.567981   14354 addons.go:231] Setting addon ingress-dns=true in "addons-522125"
	I1218 22:41:14.569785   14354 config.go:182] Loaded profile config "addons-522125": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:41:14.569688   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.568007   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.570055   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.570086   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.570201   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.570219   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.570296   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.570328   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569576   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.571220   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.569607   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.569627   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.589691   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1218 22:41:14.589835   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I1218 22:41:14.589855   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I1218 22:41:14.589908   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I1218 22:41:14.590323   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.590419   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.591033   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.591109   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.591151   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.591168   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.591185   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1218 22:41:14.591452   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.591474   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.591585   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.591897   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.592319   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.592323   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.592351   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.592429   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.592707   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.592773   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.592821   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.592865   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.592937   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.593310   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.593354   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.593327   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.593409   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.593359   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.593828   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.596878   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.596912   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.597008   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.597027   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.597267   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.597280   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.597337   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.597901   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.597939   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.603710   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.605466   14354 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-522125"
	I1218 22:41:14.605511   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.606003   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.606036   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.625024   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I1218 22:41:14.625268   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1218 22:41:14.625278   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I1218 22:41:14.625706   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.626143   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.626237   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I1218 22:41:14.626331   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.626349   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.627330   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.627482   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.627502   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.627512   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.627545   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.627831   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.627982   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.627993   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.628047   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.628193   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.628697   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.628714   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.628891   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.629266   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.629951   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.629976   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.630159   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.631736   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.634172   14354 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 22:41:14.633039   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.633150   14354 addons.go:231] Setting addon default-storageclass=true in "addons-522125"
	I1218 22:41:14.634072   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I1218 22:41:14.635744   14354 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 22:41:14.635757   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 22:41:14.635778   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.636180   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.636215   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.636343   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:14.636715   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.636748   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.637267   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.637778   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.637800   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.638701   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.638936   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.638987   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I1218 22:41:14.639509   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.639669   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I1218 22:41:14.639811   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.640077   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.640254   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.640270   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.640282   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.640435   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.640493   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.640714   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.642828   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 22:41:14.640854   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.640891   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.640901   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.642452   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I1218 22:41:14.644418   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.644532   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.644878   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 22:41:14.644896   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 22:41:14.644915   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.645074   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.645092   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.645144   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.645145   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.645174   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I1218 22:41:14.645185   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.645311   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.645327   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.645809   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.646370   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.646414   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.646927   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I1218 22:41:14.646917   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
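
	Each "new ssh client" line corresponds to a key-based SSH connection into the guest (192.168.39.206:22 as user docker with the per-profile id_rsa) that the later Run: lines go through. A standalone sketch of such a client using golang.org/x/crypto/ssh; the address, key path and sample command are taken from the log, everything else is illustrative rather than minikube's sshutil implementation:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        keyPath := "/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa"
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User: "docker",
	            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            // Acceptable for a throwaway test VM; real code should verify the host key.
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	            Timeout:         10 * time.Second,
	        }
	        client, err := ssh.Dial("tcp", "192.168.39.206:22", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        session, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer session.Close()
	        out, err := session.CombinedOutput("systemctl --version")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Print(string(out))
	    }
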
	I1218 22:41:14.647216   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.647730   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.647809   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.647823   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.648152   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.648167   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.648225   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.648240   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.650163   14354 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1218 22:41:14.648772   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.648808   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.648912   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1218 22:41:14.649821   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1218 22:41:14.650666   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.651576   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.651604   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.651701   14354 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1218 22:41:14.651711   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1218 22:41:14.651725   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.651796   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.651108   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.653405   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.653460   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.653918   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.653940   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.653986   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I1218 22:41:14.654029   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.654230   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.654411   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.654988   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.655157   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.655170   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.655301   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.655312   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.655373   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.656092   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.656108   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.656169   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.656212   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.656267   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.656281   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.656337   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.656368   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.656386   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.656486   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.656593   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.657021   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.657196   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.657222   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.658205   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.660198   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 22:41:14.658599   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.661523   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.662891   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 22:41:14.664568   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 22:41:14.664951   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I1218 22:41:14.666175   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 22:41:14.666508   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.666874   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1218 22:41:14.667945   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 22:41:14.668334   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.668684   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.670025   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.670032   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 22:41:14.670762   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.671660   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 22:41:14.671683   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.670793   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.673402   14354 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 22:41:14.675090   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 22:41:14.675108   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 22:41:14.675125   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.673969   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.674054   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.675219   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.675014   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1218 22:41:14.675839   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.676419   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.676437   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.676791   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.676970   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.677656   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:14.677695   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:14.679066   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.679573   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.679603   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.679829   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.679997   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.680149   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.680197   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.680502   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.682854   14354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 22:41:14.682004   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1218 22:41:14.682519   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46865
	I1218 22:41:14.684621   14354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:41:14.685910   14354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:41:14.685231   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.687571   14354 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 22:41:14.687592   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 22:41:14.687613   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.685272   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.686374   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.687705   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.688013   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.688148   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.688160   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.688359   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.688708   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.688912   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.690784   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1218 22:41:14.690909   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I1218 22:41:14.691311   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.691395   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.691477   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.691539   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1218 22:41:14.693415   14354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:41:14.692158   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.692312   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.692508   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.692605   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.693612   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.694153   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.695129   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.695163   14354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:41:14.695168   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.695174   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 22:41:14.695189   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.695237   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.695267   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.695724   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.695798   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.697447   14354 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 22:41:14.696126   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.696147   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.696151   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.696250   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I1218 22:41:14.696464   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.697813   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I1218 22:41:14.698956   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.699007   14354 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 22:41:14.699075   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.699231   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 22:41:14.699254   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.699819   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.699822   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.699878   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.699893   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.699892   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.699916   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.700454   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.701065   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.701132   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.701198   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.701239   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1218 22:41:14.702577   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.702672   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.702677   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.702692   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.702714   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.704427   14354 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 22:41:14.702895   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.703078   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.704536   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.703161   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.704589   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.703193   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.703216   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.703841   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.704250   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.706217   14354 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 22:41:14.706229   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 22:41:14.704713   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.706240   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.706254   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.707597   14354 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 22:41:14.704925   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.704946   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.705051   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.705210   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
	I1218 22:41:14.705324   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.708264   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.708917   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.708953   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.708991   14354 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 22:41:14.709006   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 22:41:14.709022   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.708678   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.709341   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.709362   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.709381   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.709583   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:14.709608   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.709627   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.709780   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.710614   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.710622   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:14.710641   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:14.710984   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:14.711226   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:14.712409   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.712421   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.714421   14354 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 22:41:14.712678   14354 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 22:41:14.713242   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.713271   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.713687   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.713952   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:14.716136   14354 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 22:41:14.716143   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 22:41:14.716154   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.716174   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 22:41:14.716181   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.716216   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.716235   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.718057   14354 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 22:41:14.716410   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.718950   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.719348   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.719376   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.719797   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.719799   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.721176   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.721192   14354 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 22:41:14.721213   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.721331   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.724020   14354 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 22:41:14.722511   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.722520   14354 out.go:177]   - Using image docker.io/busybox:stable
	I1218 22:41:14.722523   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.722674   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.722696   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.722684   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.726987   14354 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 22:41:14.727007   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 22:41:14.727018   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.725581   14354 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 22:41:14.727078   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 22:41:14.727097   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:14.725672   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.725749   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.727334   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.730038   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.730885   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.730888   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.730909   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.730921   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.731043   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.731193   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.731420   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:14.731425   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:14.731457   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:14.731593   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:14.731749   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:14.731874   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:14.731984   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:15.063393   14354 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 22:41:15.063414   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 22:41:15.074312   14354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-522125" context rescaled to 1 replicas
	I1218 22:41:15.074355   14354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 22:41:15.077685   14354 out.go:177] * Verifying Kubernetes components...
	I1218 22:41:15.079433   14354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:41:15.247487   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 22:41:15.339796   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 22:41:15.410461   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 22:41:15.410486   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 22:41:15.475009   14354 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 22:41:15.475039   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 22:41:15.491424   14354 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1218 22:41:15.491445   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1218 22:41:15.535323   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 22:41:15.535444   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 22:41:15.551512   14354 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 22:41:15.551539   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 22:41:15.595491   14354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 22:41:15.595519   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 22:41:15.627983   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 22:41:15.641964   14354 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 22:41:15.641993   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 22:41:15.750782   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:41:15.755107   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 22:41:15.762994   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 22:41:15.763018   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 22:41:15.834051   14354 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 22:41:15.834076   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 22:41:15.857092   14354 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1218 22:41:15.857120   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1218 22:41:15.861061   14354 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 22:41:15.861084   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 22:41:15.891047   14354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.322988185s)
	I1218 22:41:15.891202   14354 node_ready.go:35] waiting up to 6m0s for node "addons-522125" to be "Ready" ...
	I1218 22:41:15.891216   14354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 22:41:15.895544   14354 node_ready.go:49] node "addons-522125" has status "Ready":"True"
	I1218 22:41:15.895564   14354 node_ready.go:38] duration metric: took 4.341418ms waiting for node "addons-522125" to be "Ready" ...
	I1218 22:41:15.895574   14354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 22:41:15.907332   14354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8stxk" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:15.947124   14354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 22:41:15.947144   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 22:41:16.005117   14354 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 22:41:16.005136   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 22:41:16.039747   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 22:41:16.039771   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 22:41:16.098540   14354 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 22:41:16.098575   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 22:41:16.101365   14354 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 22:41:16.101389   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 22:41:16.257752   14354 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 22:41:16.257774   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 22:41:16.322336   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 22:41:16.322361   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 22:41:16.330772   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1218 22:41:16.424594   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 22:41:16.424620   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 22:41:16.459345   14354 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 22:41:16.459365   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 22:41:16.471794   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 22:41:16.527748   14354 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 22:41:16.527790   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 22:41:16.658220   14354 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:41:16.658247   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 22:41:16.692750   14354 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 22:41:16.692776   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 22:41:16.743094   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 22:41:16.864343   14354 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 22:41:16.864367   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 22:41:17.010910   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:41:17.032584   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 22:41:17.032614   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 22:41:17.279395   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 22:41:17.423699   14354 pod_ready.go:92] pod "coredns-5dd5756b68-8stxk" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:17.423720   14354 pod_ready.go:81] duration metric: took 1.516365024s waiting for pod "coredns-5dd5756b68-8stxk" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.423728   14354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wxg6p" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.426208   14354 pod_ready.go:97] error getting pod "coredns-5dd5756b68-wxg6p" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-wxg6p" not found
	I1218 22:41:17.426236   14354 pod_ready.go:81] duration metric: took 2.500212ms waiting for pod "coredns-5dd5756b68-wxg6p" in "kube-system" namespace to be "Ready" ...
	E1218 22:41:17.426248   14354 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-wxg6p" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-wxg6p" not found
	I1218 22:41:17.426259   14354 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.434459   14354 pod_ready.go:92] pod "etcd-addons-522125" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:17.434489   14354 pod_ready.go:81] duration metric: took 8.221483ms waiting for pod "etcd-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.434500   14354 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.443507   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 22:41:17.443523   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 22:41:17.450630   14354 pod_ready.go:92] pod "kube-apiserver-addons-522125" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:17.450651   14354 pod_ready.go:81] duration metric: took 16.143786ms waiting for pod "kube-apiserver-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.450663   14354 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.456161   14354 pod_ready.go:92] pod "kube-controller-manager-addons-522125" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:17.456182   14354 pod_ready.go:81] duration metric: took 5.510748ms waiting for pod "kube-controller-manager-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.456193   14354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xdqsq" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.731490   14354 pod_ready.go:92] pod "kube-proxy-xdqsq" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:17.731512   14354 pod_ready.go:81] duration metric: took 275.310888ms waiting for pod "kube-proxy-xdqsq" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.731524   14354 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:17.927821   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 22:41:17.927844   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 22:41:18.095200   14354 pod_ready.go:92] pod "kube-scheduler-addons-522125" in "kube-system" namespace has status "Ready":"True"
	I1218 22:41:18.095224   14354 pod_ready.go:81] duration metric: took 363.69254ms waiting for pod "kube-scheduler-addons-522125" in "kube-system" namespace to be "Ready" ...
	I1218 22:41:18.095235   14354 pod_ready.go:38] duration metric: took 2.199649724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 22:41:18.095254   14354 api_server.go:52] waiting for apiserver process to appear ...
	I1218 22:41:18.095309   14354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 22:41:18.184453   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 22:41:18.184474   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 22:41:18.447250   14354 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 22:41:18.447273   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 22:41:18.574608   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 22:41:18.872998   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.625470632s)
	I1218 22:41:18.873047   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:18.873060   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:18.873375   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:18.873402   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:18.873436   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:18.873452   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:18.873462   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:18.873698   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:18.873711   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:18.873719   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:21.310327   14354 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 22:41:21.310366   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:21.314091   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:21.314550   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:21.314572   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:21.314737   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:21.314960   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:21.315177   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:21.315420   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:22.278044   14354 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1218 22:41:22.454082   14354 addons.go:231] Setting addon gcp-auth=true in "addons-522125"
	I1218 22:41:22.454151   14354 host.go:66] Checking if "addons-522125" exists ...
	I1218 22:41:22.454573   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:22.454617   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:22.469733   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35863
	I1218 22:41:22.470196   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:22.470684   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:22.470709   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:22.471020   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:22.471593   14354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:41:22.471642   14354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:41:22.486111   14354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43963
	I1218 22:41:22.486532   14354 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:41:22.487389   14354 main.go:141] libmachine: Using API Version  1
	I1218 22:41:22.487413   14354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:41:22.487692   14354 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:41:22.487885   14354 main.go:141] libmachine: (addons-522125) Calling .GetState
	I1218 22:41:22.489402   14354 main.go:141] libmachine: (addons-522125) Calling .DriverName
	I1218 22:41:22.489630   14354 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 22:41:22.489650   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHHostname
	I1218 22:41:22.492506   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:22.492970   14354 main.go:141] libmachine: (addons-522125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:5c:80", ip: ""} in network mk-addons-522125: {Iface:virbr1 ExpiryTime:2023-12-18 23:40:29 +0000 UTC Type:0 Mac:52:54:00:9f:5c:80 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-522125 Clientid:01:52:54:00:9f:5c:80}
	I1218 22:41:22.493002   14354 main.go:141] libmachine: (addons-522125) DBG | domain addons-522125 has defined IP address 192.168.39.206 and MAC address 52:54:00:9f:5c:80 in network mk-addons-522125
	I1218 22:41:22.493148   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHPort
	I1218 22:41:22.493331   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHKeyPath
	I1218 22:41:22.493502   14354 main.go:141] libmachine: (addons-522125) Calling .GetSSHUsername
	I1218 22:41:22.493644   14354 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/addons-522125/id_rsa Username:docker}
	I1218 22:41:25.767861   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.428011452s)
	I1218 22:41:25.767920   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.767938   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.767939   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.232578715s)
	I1218 22:41:25.767966   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.232498514s)
	I1218 22:41:25.767977   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.767988   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768002   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.767993   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768082   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.017276139s)
	I1218 22:41:25.768096   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768110   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768020   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.140010131s)
	I1218 22:41:25.768130   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768140   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768183   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.013053302s)
	I1218 22:41:25.768201   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768209   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768209   14354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.87697655s)
	I1218 22:41:25.768228   14354 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
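	The sed pipeline whose completion is logged just above rewrites the CoreDNS Corefile in place: it inserts a "hosts" block (resolving host.minikube.internal to the host-side address, 192.168.39.1 on this run) ahead of the "forward . /etc/resolv.conf" directive, plus a "log" directive ahead of "errors". A minimal way to inspect the result from a workstation with access to the same cluster (an illustrative sketch, not part of the test run):
	
	  # Print the patched Corefile; the injected hosts block should appear
	  # immediately before the "forward . /etc/resolv.conf" line.
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	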
	I1218 22:41:25.768268   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.437469906s)
	I1218 22:41:25.768292   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768297   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.296467872s)
	I1218 22:41:25.768303   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768310   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768320   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768406   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.025278562s)
	I1218 22:41:25.768434   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768447   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768471   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768480   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768493   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768503   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.768521   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.768530   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768538   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768558   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.768569   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.768577   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768581   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.757641851s)
	I1218 22:41:25.768586   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	W1218 22:41:25.768605   14354 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 22:41:25.768624   14354 retry.go:31] will retry after 295.749525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
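	The apply failure above is an ordering issue within a single batched "kubectl apply": the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same invocation as the CRDs that define its kind, so the API server rejects it before those CRDs are established. The ~296ms retry logged above (retry.go:31) normally succeeds once the CRDs are registered. When reproducing by hand, one could instead wait for the CRD explicitly before re-applying the class (an illustrative sketch, not something the test itself runs):
	
	  # Illustrative only: block until the VolumeSnapshotClass CRD created in the
	  # first pass is established, so a re-apply of the class will not be rejected.
	  kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	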
	I1218 22:41:25.768630   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768650   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.768661   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.768670   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768677   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768690   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.489263474s)
	I1218 22:41:25.768708   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.768721   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.768797   14354 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.673472423s)
	I1218 22:41:25.768824   14354 api_server.go:72] duration metric: took 10.694442557s to wait for apiserver process to appear ...
	I1218 22:41:25.768837   14354 api_server.go:88] waiting for apiserver healthz status ...
	I1218 22:41:25.768842   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768861   14354 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1218 22:41:25.768903   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.768938   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.768952   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.769172   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.769199   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.769216   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.769232   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.769285   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.769314   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.769339   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.769356   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.769373   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.769598   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.769620   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.769639   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.769647   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771335   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.771365   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.771373   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771510   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.771530   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.771538   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771546   14354 addons.go:467] Verifying addon metrics-server=true in "addons-522125"
	I1218 22:41:25.771655   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.771719   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.771741   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771769   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.771787   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.771830   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.771846   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771855   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.771857   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.771864   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.771880   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.771888   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.771897   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.771905   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.771988   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.772001   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.772024   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.772044   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.772511   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.772528   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.772540   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.772555   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.773760   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.773775   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.773791   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.773806   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.773808   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.773816   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.773816   14354 addons.go:467] Verifying addon registry=true in "addons-522125"
	I1218 22:41:25.775783   14354 out.go:177] * Verifying registry addon...
	I1218 22:41:25.773927   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.774103   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.774119   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.774129   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.774165   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.774166   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.774168   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.774185   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.775878   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.775892   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.775902   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.775902   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.777342   14354 addons.go:467] Verifying addon ingress=true in "addons-522125"
	I1218 22:41:25.778893   14354 out.go:177] * Verifying ingress addon...
	I1218 22:41:25.777926   14354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 22:41:25.780974   14354 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 22:41:25.787835   14354 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 22:41:25.787855   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:25.789842   14354 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1218 22:41:25.791565   14354 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 22:41:25.791578   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:25.794964   14354 api_server.go:141] control plane version: v1.28.4
	I1218 22:41:25.794984   14354 api_server.go:131] duration metric: took 26.134513ms to wait for apiserver health ...
	I1218 22:41:25.794992   14354 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 22:41:25.814448   14354 system_pods.go:59] 15 kube-system pods found
	I1218 22:41:25.814474   14354 system_pods.go:61] "coredns-5dd5756b68-8stxk" [0ae6f767-a93c-4231-9e80-da8f9ee56ea1] Running
	I1218 22:41:25.814479   14354 system_pods.go:61] "etcd-addons-522125" [bdc34b94-60a0-4097-95e6-79a559a41d4a] Running
	I1218 22:41:25.814483   14354 system_pods.go:61] "kube-apiserver-addons-522125" [db1c7d9b-b6da-49a0-9997-0617896225b8] Running
	I1218 22:41:25.814487   14354 system_pods.go:61] "kube-controller-manager-addons-522125" [12155231-08ca-45ba-8ead-f85314995682] Running
	I1218 22:41:25.814494   14354 system_pods.go:61] "kube-ingress-dns-minikube" [25704338-1fea-4d6f-8d92-3248aa65f692] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 22:41:25.814512   14354 system_pods.go:61] "kube-proxy-xdqsq" [fbb661a5-a229-4a26-a271-61902585fdf8] Running
	I1218 22:41:25.814520   14354 system_pods.go:61] "kube-scheduler-addons-522125" [2b4fabf6-a9fe-406d-a947-b6f1cbd8818e] Running
	I1218 22:41:25.814528   14354 system_pods.go:61] "metrics-server-7c66d45ddc-98lx4" [e8b44252-6b91-46f8-aa31-02c97367111f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 22:41:25.814553   14354 system_pods.go:61] "nvidia-device-plugin-daemonset-pgwhq" [b1d07de4-f656-4c4d-9bf0-e7c14b3514a4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 22:41:25.814565   14354 system_pods.go:61] "registry-mdc25" [af8d151e-3121-43f3-8722-7b99e5d5a1c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 22:41:25.814573   14354 system_pods.go:61] "registry-proxy-h4tds" [e6b6af54-7a7a-4493-8ad0-7b28fdde2c62] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 22:41:25.814583   14354 system_pods.go:61] "snapshot-controller-58dbcc7b99-866jc" [8188f3a2-739d-48f4-962e-8de1cb8e6777] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:41:25.814593   14354 system_pods.go:61] "snapshot-controller-58dbcc7b99-qd8p7" [485bf70f-eb6f-40d3-a67b-61c3e865f8b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:41:25.814598   14354 system_pods.go:61] "storage-provisioner" [19060590-1b1c-4e51-bef9-f405a59b6e77] Running
	I1218 22:41:25.814604   14354 system_pods.go:61] "tiller-deploy-7b677967b9-mwq2f" [cb75981a-563b-4f1f-88bf-5848ade2b8ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1218 22:41:25.814614   14354 system_pods.go:74] duration metric: took 19.617115ms to wait for pod list to return data ...
	I1218 22:41:25.814627   14354 default_sa.go:34] waiting for default service account to be created ...
	I1218 22:41:25.826351   14354 default_sa.go:45] found service account: "default"
	I1218 22:41:25.826372   14354 default_sa.go:55] duration metric: took 11.736487ms for default service account to be created ...
	I1218 22:41:25.826380   14354 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 22:41:25.827278   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.827293   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.827555   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.827594   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.827604   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:25.831532   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:25.831548   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:25.831775   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:25.831814   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:25.831827   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	W1218 22:41:25.831907   14354 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
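The "Operation cannot be fulfilled ... the object has been modified" warning above is the standard Kubernetes optimistic-concurrency conflict: the local-path StorageClass changed between the read and the write, so the update was rejected on resourceVersion. The usual remedy is to re-read the object and retry the update. The sketch below shows that pattern with client-go's retry helper; it is illustrative only (not the storage-provisioner-rancher addon's actual code) and assumes a kubeconfig at the default location.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markDefault retries the annotation update whenever the API server reports a
// resourceVersion conflict, re-reading the StorageClass before each attempt.
func markDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// This annotation is what makes a StorageClass the cluster default.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another retry
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markDefault(cs, "local-path"); err != nil {
		panic(err)
	}
}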
	I1218 22:41:25.838969   14354 system_pods.go:86] 15 kube-system pods found
	I1218 22:41:25.838991   14354 system_pods.go:89] "coredns-5dd5756b68-8stxk" [0ae6f767-a93c-4231-9e80-da8f9ee56ea1] Running
	I1218 22:41:25.838996   14354 system_pods.go:89] "etcd-addons-522125" [bdc34b94-60a0-4097-95e6-79a559a41d4a] Running
	I1218 22:41:25.839001   14354 system_pods.go:89] "kube-apiserver-addons-522125" [db1c7d9b-b6da-49a0-9997-0617896225b8] Running
	I1218 22:41:25.839005   14354 system_pods.go:89] "kube-controller-manager-addons-522125" [12155231-08ca-45ba-8ead-f85314995682] Running
	I1218 22:41:25.839012   14354 system_pods.go:89] "kube-ingress-dns-minikube" [25704338-1fea-4d6f-8d92-3248aa65f692] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 22:41:25.839019   14354 system_pods.go:89] "kube-proxy-xdqsq" [fbb661a5-a229-4a26-a271-61902585fdf8] Running
	I1218 22:41:25.839026   14354 system_pods.go:89] "kube-scheduler-addons-522125" [2b4fabf6-a9fe-406d-a947-b6f1cbd8818e] Running
	I1218 22:41:25.839034   14354 system_pods.go:89] "metrics-server-7c66d45ddc-98lx4" [e8b44252-6b91-46f8-aa31-02c97367111f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 22:41:25.839046   14354 system_pods.go:89] "nvidia-device-plugin-daemonset-pgwhq" [b1d07de4-f656-4c4d-9bf0-e7c14b3514a4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1218 22:41:25.839062   14354 system_pods.go:89] "registry-mdc25" [af8d151e-3121-43f3-8722-7b99e5d5a1c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 22:41:25.839075   14354 system_pods.go:89] "registry-proxy-h4tds" [e6b6af54-7a7a-4493-8ad0-7b28fdde2c62] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 22:41:25.839081   14354 system_pods.go:89] "snapshot-controller-58dbcc7b99-866jc" [8188f3a2-739d-48f4-962e-8de1cb8e6777] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:41:25.839091   14354 system_pods.go:89] "snapshot-controller-58dbcc7b99-qd8p7" [485bf70f-eb6f-40d3-a67b-61c3e865f8b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:41:25.839098   14354 system_pods.go:89] "storage-provisioner" [19060590-1b1c-4e51-bef9-f405a59b6e77] Running
	I1218 22:41:25.839104   14354 system_pods.go:89] "tiller-deploy-7b677967b9-mwq2f" [cb75981a-563b-4f1f-88bf-5848ade2b8ff] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1218 22:41:25.839113   14354 system_pods.go:126] duration metric: took 12.727638ms to wait for k8s-apps to be running ...
	I1218 22:41:25.839128   14354 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 22:41:25.839178   14354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
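The kubelet check above relies on the exit status of `systemctl is-active --quiet`, which is 0 only when the unit is active. A local Go equivalent of that check is sketched below; it is a sketch only, not minikube's ssh_runner, which runs the same command over SSH inside the VM.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; any non-zero status means it is not.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}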
	I1218 22:41:26.064983   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:41:26.285479   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:26.289089   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:26.807083   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:26.833130   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:27.300659   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:27.300820   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:27.792342   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:27.793816   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:28.259737   14354 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.770084682s)
	I1218 22:41:28.259778   14354 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420573362s)
	I1218 22:41:28.259805   14354 system_svc.go:56] duration metric: took 2.420680206s WaitForService to wait for kubelet.
	I1218 22:41:28.259813   14354 kubeadm.go:581] duration metric: took 13.185433304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 22:41:28.259844   14354 node_conditions.go:102] verifying NodePressure condition ...
	I1218 22:41:28.261412   14354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:41:28.263629   14354 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 22:41:28.264949   14354 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 22:41:28.264965   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1218 22:41:28.265175   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.690525011s)
	I1218 22:41:28.265215   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:28.265230   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:28.265455   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:28.265503   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:28.265505   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:28.265531   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:28.265540   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:28.265785   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:28.265806   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:28.265819   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:28.265837   14354 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-522125"
	I1218 22:41:28.267386   14354 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 22:41:28.269401   14354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 22:41:28.271403   14354 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 22:41:28.271424   14354 node_conditions.go:123] node cpu capacity is 2
	I1218 22:41:28.271432   14354 node_conditions.go:105] duration metric: took 11.579159ms to run NodePressure ...
	I1218 22:41:28.271442   14354 start.go:228] waiting for startup goroutines ...
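The node_conditions lines above report the node's ephemeral-storage and CPU capacity and confirm there is no NodePressure. A minimal client-go sketch of reading that same information is shown below, assuming a kubeconfig at the default location; it is an illustration of the API surface, not minikube's node_conditions code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure, and PIDPressure should all be False on a healthy node.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node %s is under %s\n", n.Name, c.Type)
			}
		}
	}
}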
	I1218 22:41:28.332334   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:28.337500   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:28.338669   14354 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 22:41:28.338684   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:28.376807   14354 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 22:41:28.376829   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 22:41:28.531319   14354 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 22:41:28.531338   14354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 22:41:28.604919   14354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 22:41:28.779743   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:28.785653   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:28.787100   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:29.276299   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:29.287508   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:29.288884   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:29.741475   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.676437851s)
	I1218 22:41:29.741520   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:29.741531   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:29.741890   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:29.741942   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:29.741951   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:29.741969   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:29.741978   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:29.742201   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:29.742217   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:29.776048   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:29.787956   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:29.788177   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:30.275741   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:30.286167   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:30.288887   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:30.591254   14354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.986302006s)
	I1218 22:41:30.591291   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:30.591303   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:30.591564   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:30.591578   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:30.591587   14354 main.go:141] libmachine: Making call to close driver server
	I1218 22:41:30.591595   14354 main.go:141] libmachine: (addons-522125) Calling .Close
	I1218 22:41:30.591836   14354 main.go:141] libmachine: Successfully made call to close driver server
	I1218 22:41:30.591857   14354 main.go:141] libmachine: Making call to close connection to plugin binary
	I1218 22:41:30.591862   14354 main.go:141] libmachine: (addons-522125) DBG | Closing plugin on server side
	I1218 22:41:30.594063   14354 addons.go:467] Verifying addon gcp-auth=true in "addons-522125"
	I1218 22:41:30.595956   14354 out.go:177] * Verifying gcp-auth addon...
	I1218 22:41:30.598553   14354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 22:41:30.604819   14354 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 22:41:30.604841   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:30.775317   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:30.784668   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:30.786148   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:31.102648   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:31.275426   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:31.285637   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:31.287515   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:31.602838   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:31.774808   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:31.787228   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:31.787606   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:32.102983   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:32.275169   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:32.285682   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:32.287763   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:32.602219   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:32.775242   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:32.785658   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:32.788204   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:33.103655   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:33.275287   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:33.284345   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:33.287232   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:33.602429   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:33.775640   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:33.787192   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:33.787250   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:34.103603   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:34.662763   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:34.667658   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:34.668565   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:34.668580   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:34.775396   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:34.785194   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:34.786812   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:35.102905   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:35.276074   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:35.287026   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:35.287107   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:35.602534   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:35.778235   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:35.786128   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:35.786459   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:36.104467   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:36.277228   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:36.285266   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:36.285752   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:36.602662   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:36.777986   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:36.785531   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:36.785800   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:37.102108   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:37.275797   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:37.286934   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:37.289795   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:37.603215   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:37.777479   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:37.785171   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:37.786037   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:38.102988   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:38.275968   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:38.288803   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:38.295113   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:38.604474   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:38.781617   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:38.786436   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:38.788615   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:39.103179   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:39.275529   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:39.284841   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:39.286613   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:39.604273   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:39.775959   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:39.784832   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:39.785735   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:40.102537   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:40.274986   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:40.286085   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:40.287952   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:40.602890   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:40.775624   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:40.785486   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:40.785698   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:41.103521   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:41.274802   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:41.285275   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:41.288223   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:41.605443   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:41.775811   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:41.787029   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:41.787361   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:42.105764   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:42.274826   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:42.286927   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:42.287018   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:42.602784   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:42.775733   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:42.785893   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:42.786993   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:43.102863   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:43.275431   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:43.284630   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:43.285571   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:43.603010   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:43.775333   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:43.785707   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:43.787666   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:44.103187   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:44.275217   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:44.285182   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:44.286877   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:44.602944   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:44.775173   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:44.795136   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:44.799331   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:45.103154   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:45.276300   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:45.286872   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:45.287322   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:45.603717   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:45.775738   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:45.784671   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:45.785576   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:46.103008   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:46.561431   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:46.561597   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:46.563058   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:46.602825   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:46.774911   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:46.786154   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:46.788654   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:47.102935   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:47.275908   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:47.290293   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:47.290635   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:47.602860   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:47.774903   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:47.786070   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:47.788509   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:48.103844   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:48.278647   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:48.285870   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:48.286339   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:48.603421   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:48.774831   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:48.786789   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:48.787265   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:49.102159   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:49.275624   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:49.286980   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:49.287288   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:49.602593   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:49.775047   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:49.785620   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:49.787231   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:50.102935   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:50.276144   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:50.286220   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:50.288209   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:50.602485   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:50.778339   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:50.787565   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:50.787877   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:51.102720   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:51.277499   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:51.286938   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:51.287100   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:51.604704   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:51.779087   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:51.784761   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:51.789840   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:52.102750   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:52.277944   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:52.285981   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:52.288119   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:52.603096   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:52.777149   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:52.790499   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:52.791774   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:53.102688   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:53.279873   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:53.286123   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:53.287631   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:53.603067   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:53.776143   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:53.785705   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:53.787113   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:54.102792   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:54.276487   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:54.286130   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:54.288741   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:54.603447   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:54.775998   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:54.788064   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:54.790650   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:55.103172   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:55.276088   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:55.286951   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:55.287338   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:55.603393   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:55.777124   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:55.785515   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:55.785697   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:56.103721   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:56.278195   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:56.295208   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:56.295545   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:56.603307   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:56.776229   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:56.784907   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:56.785275   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:57.103142   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:57.275711   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:57.285909   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:57.286123   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:57.603219   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:57.775889   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:57.785273   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:57.785464   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:58.102675   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:58.275290   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:58.284873   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:58.287054   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:58.603511   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:58.775775   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:58.786804   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:58.786966   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:59.103118   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:59.293398   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:59.296317   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:59.298079   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:41:59.605448   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:41:59.776278   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:41:59.784661   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:41:59.785499   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:00.103735   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:00.274669   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:00.284867   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:00.285162   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:00.602631   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:00.779767   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:00.786608   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:00.788037   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:01.103042   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:01.275873   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:01.288932   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:01.294273   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:01.602809   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:01.777655   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:01.785983   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:01.788234   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:02.102893   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:02.277304   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:02.286787   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:02.288223   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:02.603886   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:02.779619   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:02.788782   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:02.789556   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:03.103730   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:03.276557   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:03.289507   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:03.290252   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:03.603672   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:03.775580   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:03.785416   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:03.786727   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:04.102033   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:04.279558   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:04.284793   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:04.287541   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:04.603916   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:04.775763   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:04.787354   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:04.787982   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:05.103237   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:05.277579   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:05.290509   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:05.291908   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:05.602509   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:05.778040   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:05.789696   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:05.789822   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:06.107210   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:06.276827   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:06.291209   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:06.295846   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:06.604123   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:06.775861   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:06.786162   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:06.787550   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:07.103245   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:07.275940   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:07.285599   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:07.286882   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:07.603047   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:07.776239   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:07.785884   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:07.786290   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:08.103939   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:08.277296   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:08.286464   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:08.288328   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:08.603013   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:08.778427   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:08.787794   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:08.787817   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:09.102116   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:09.275247   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:09.284894   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:09.287842   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:09.603143   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:09.776192   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:09.787321   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:09.793107   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:10.102748   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:10.275501   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:10.284862   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:10.287608   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:10.602767   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:10.774969   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:10.786953   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:10.787095   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:11.103278   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:11.275526   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:11.287204   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:42:11.287691   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:11.603137   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:11.775690   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:11.784963   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:11.785821   14354 kapi.go:107] duration metric: took 46.007893907s to wait for kubernetes.io/minikube-addons=registry ...
	I1218 22:42:12.102588   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:12.274524   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:12.285847   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:12.603928   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:12.776152   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:12.785481   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:13.103742   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:13.276334   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:13.285431   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:13.602761   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:13.774842   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:13.784754   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:14.103402   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:14.276183   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:14.288128   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:14.604979   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:14.775409   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:14.785737   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:15.102858   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:15.275762   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:15.286802   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:15.603085   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:15.775288   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:15.785130   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:16.103727   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:16.275107   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:16.287117   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:16.604536   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:16.774965   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:16.786738   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:17.107535   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:17.279775   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:17.286338   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:17.604238   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:17.776951   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:17.785085   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:18.103250   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:18.275320   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:18.285121   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:18.602685   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:18.776171   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:18.785995   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:19.103873   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:19.275312   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:19.286528   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:19.603323   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:19.775963   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:19.784866   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:20.103100   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:20.277517   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:20.285792   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:20.608681   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:20.775182   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:20.787939   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:21.103468   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:21.276431   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:21.285133   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:21.604034   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:21.776178   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:21.953081   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:22.102984   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:22.275009   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:22.285368   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:22.603197   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:22.778849   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:22.785088   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:23.102468   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:23.275845   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:23.285406   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:23.602686   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:23.775678   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:23.786281   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:24.102359   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:24.434706   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:24.441778   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:24.603380   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:24.779939   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:24.785400   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:25.102975   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:25.275488   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:25.285717   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:25.602920   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:25.775422   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:25.786321   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:26.102483   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:26.275523   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:26.286075   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:26.711450   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:26.775947   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:26.786027   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:27.103846   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:27.275682   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:27.287595   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:27.603503   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:27.775634   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:27.785880   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:28.102951   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:28.275557   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:28.286948   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:28.603425   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:28.775784   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:28.784907   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:29.103179   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:29.453723   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:29.453936   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:29.603234   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:29.775999   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:29.785186   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:30.102601   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:30.277701   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:30.285713   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:30.603641   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:30.787947   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:30.788560   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:31.103138   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:31.276069   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:31.288750   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:31.607087   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:31.775903   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:31.785720   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:32.103456   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:32.277711   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:32.286256   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:32.602367   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:32.775515   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:32.786109   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:33.102789   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:33.275680   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:33.285369   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:33.603007   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:33.776430   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:33.785245   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:34.102934   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:34.276222   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:34.285255   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:34.603371   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:34.776364   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:34.786292   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:35.102741   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:35.275145   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:35.285585   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:35.602589   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:36.214683   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:36.215744   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:36.216204   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:36.275976   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:36.285203   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:36.603717   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:36.775411   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:36.785276   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:37.102854   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:37.274706   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:37.286047   14354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:42:37.602890   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:37.775388   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:37.789929   14354 kapi.go:107] duration metric: took 1m12.008951243s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 22:42:38.104714   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:38.275057   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:38.604452   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:38.775760   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:39.102895   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:39.275240   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:39.603014   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:39.777485   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:40.103793   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:40.275886   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:40.603873   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:40.775508   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:41.102759   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:41.275506   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:41.602978   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:41.776326   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:42.102394   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:42.276837   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:42.603243   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:42.775791   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:43.104203   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:43.276118   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:43.603342   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:42:43.777617   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:44.108775   14354 kapi.go:107] duration metric: took 1m13.510220403s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 22:42:44.110976   14354 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-522125 cluster.
	I1218 22:42:44.112534   14354 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 22:42:44.113878   14354 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
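	The `gcp-auth-skip-secret` label mentioned in the message above is the opt-out switch for the credential-mounting webhook. A minimal sketch in Go using the Kubernetes API types of a pod that would be skipped; only the label key comes from the log above, while the pod name, image, and the label value "true" are illustrative assumptions:
	
	package example
	
	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	// podWithoutGCPCreds builds a Pod carrying the gcp-auth-skip-secret label so
	// the gcp-auth webhook would not mount GCP credentials into it. Name, image,
	// and the label value are assumptions for illustration only.
	func podWithoutGCPCreds() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "example-no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox"},
				},
			},
		}
	}
	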
	I1218 22:42:44.275968   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:44.776206   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:45.485084   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:45.775711   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:46.275488   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:46.776840   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:47.275836   14354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:42:47.776225   14354 kapi.go:107] duration metric: took 1m19.506822388s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 22:42:47.778094   14354 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, metrics-server, ingress-dns, inspektor-gadget, helm-tiller, cloud-spanner, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1218 22:42:47.779779   14354 addons.go:502] enable addons completed in 1m33.21223263s: enabled=[nvidia-device-plugin storage-provisioner metrics-server ingress-dns inspektor-gadget helm-tiller cloud-spanner default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1218 22:42:47.779821   14354 start.go:233] waiting for cluster config update ...
	I1218 22:42:47.779845   14354 start.go:242] writing updated cluster config ...
	I1218 22:42:47.780082   14354 ssh_runner.go:195] Run: rm -f paused
	I1218 22:42:47.832956   14354 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 22:42:47.836255   14354 out.go:177] * Done! kubectl is now configured to use "addons-522125" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d6d71d970aa7d       a416a98b71e22       3 seconds ago        Exited              helper-pod                               0                   f9aec002a94de       helper-pod-delete-pvc-5df8efa3-be39-4112-b71b-2a19c24d1a6e
	c8af6814615f2       beae173ccac6a       4 seconds ago        Exited              registry-test                            0                   b0280dab5882b       registry-test
	7e9e6a87cc504       f5fb98afcf9f5       7 seconds ago        Exited              busybox                                  0                   d39eea5179704       test-local-path
	59f3aa7bf282a       738351fd438f0       20 seconds ago       Running             csi-snapshotter                          0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	d43049fd3fa03       931dbfd16f87c       22 seconds ago       Running             csi-provisioner                          0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	7fcfbd4b0140c       e899260153aed       24 seconds ago       Running             liveness-probe                           0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	920041786b4f3       6d2a98b274382       24 seconds ago       Running             gcp-auth                                 0                   7848ebbb435cd       gcp-auth-d4c87556c-mgfm2
	0017e05ba3151       e255e073c508c       29 seconds ago       Running             hostpath                                 0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	4d48e79207053       5aa0bf4798fa2       30 seconds ago       Running             controller                               0                   c2865b86c5465       ingress-nginx-controller-7c6974c4d8-cj4g5
	e12de51fba43c       88ef14a257f42       36 seconds ago       Running             node-driver-registrar                    0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	87c79e845a125       1ebff0f9671bc       38 seconds ago       Exited              patch                                    0                   9556b67f490b9       gcp-auth-certs-patch-c7fc6
	089b059e25aea       1ebff0f9671bc       38 seconds ago       Exited              create                                   0                   5384d6a826cc0       gcp-auth-certs-create-svcwr
	889bfb730e7d7       a1ed5895ba635       38 seconds ago       Running             csi-external-health-monitor-controller   0                   2ef3005a5ade8       csi-hostpathplugin-dv9bz
	6a2c5fcec811f       59cbb42146a37       40 seconds ago       Running             csi-attacher                             0                   61ae768bfc934       csi-hostpath-attacher-0
	2a491c136d64a       19a639eda60f0       41 seconds ago       Running             csi-resizer                              0                   4fa90552f09de       csi-hostpath-resizer-0
	855fcf9280799       1ebff0f9671bc       43 seconds ago       Exited              patch                                    0                   40aa261840852       ingress-nginx-admission-patch-8tfnc
	2f4c60318ae14       1ebff0f9671bc       43 seconds ago       Exited              create                                   0                   fc1fc7d30e9c0       ingress-nginx-admission-create-j8qx9
	70035f67f35b5       aa61ee9c70bc4       45 seconds ago       Running             volume-snapshot-controller               0                   5a7cb68c02091       snapshot-controller-58dbcc7b99-866jc
	7320bde2a3f20       aa61ee9c70bc4       45 seconds ago       Running             volume-snapshot-controller               0                   b01a1e5c051a9       snapshot-controller-58dbcc7b99-qd8p7
	8ddb8e400c0ba       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   3247d03b24d60       local-path-provisioner-78b46b4d5c-5lrb7
	bac74863cb3d7       3f39089e90831       About a minute ago   Running             tiller                                   0                   f9eee80cd5fd8       tiller-deploy-7b677967b9-mwq2f
	3044e7ea28bbd       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   220ba986a5c06       kube-ingress-dns-minikube
	291620eed6438       e41cf323c46dd       About a minute ago   Running             cloud-spanner-emulator                   0                   8c7adeac3cd72       cloud-spanner-emulator-5649c69bf6-4wtd9
	764ec01b05490       8cfc3f994a82b       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   feaf038e113a0       nvidia-device-plugin-daemonset-pgwhq
	9c928c7e3cf5e       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   28473c560eb57       storage-provisioner
	3b35022d7f3f3       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   81ee6ed467975       coredns-5dd5756b68-8stxk
	cae5844719988       83f6cc407eed8       About a minute ago   Running             kube-proxy                               0                   554fa7ce42fc6       kube-proxy-xdqsq
	7ea083d61aa06       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   33d2b4e8f393a       etcd-addons-522125
	305a65fdd1886       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler                           0                   d7c21e4205892       kube-scheduler-addons-522125
	4afe1b38007de       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager                  0                   3ca211a8f2496       kube-controller-manager-addons-522125
	bc1bb7a6acd28       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver                           0                   964cf489a1fb3       kube-apiserver-addons-522125
	
	* 
	* ==> containerd <==
	* -- Journal begins at Mon 2023-12-18 22:40:26 UTC, ends at Mon 2023-12-18 22:43:07 UTC. --
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.014803633Z" level=info msg="shim disconnected" id=1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06 namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.015023711Z" level=warning msg="cleaning up after shim disconnected" id=1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06 namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.015113557Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.054917409Z" level=warning msg="cleanup warnings time=\"2023-12-18T22:43:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.064404531Z" level=info msg="StopContainer for \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.065475337Z" level=info msg="StopPodSandbox for \"9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c\""
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.065614298Z" level=info msg="Container to stop \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.124585419Z" level=info msg="shim disconnected" id=b01132555d1c9415610f952ffb80866e08357ae701935a4def388f0a4eb7b363 namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.124629254Z" level=warning msg="cleaning up after shim disconnected" id=b01132555d1c9415610f952ffb80866e08357ae701935a4def388f0a4eb7b363 namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.124638038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.214891160Z" level=info msg="shim disconnected" id=9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.215034550Z" level=warning msg="cleaning up after shim disconnected" id=9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.215154686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.239794167Z" level=info msg="TearDown network for sandbox \"f9aec002a94de3954d24f9309c7e6cdcb2c1d3a346b95c7d5503c2ea0ab3857c\" successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.239854575Z" level=info msg="StopPodSandbox for \"f9aec002a94de3954d24f9309c7e6cdcb2c1d3a346b95c7d5503c2ea0ab3857c\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.300130272Z" level=info msg="TearDown network for sandbox \"b01132555d1c9415610f952ffb80866e08357ae701935a4def388f0a4eb7b363\" successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.300281109Z" level=info msg="StopPodSandbox for \"b01132555d1c9415610f952ffb80866e08357ae701935a4def388f0a4eb7b363\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.399517101Z" level=info msg="TearDown network for sandbox \"9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c\" successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.399577171Z" level=info msg="StopPodSandbox for \"9acb78e5bd8db4347f42f51c914ce6a34625763346b5f2382996dea56293ab1c\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.830362082Z" level=info msg="RemoveContainer for \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\""
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.849825307Z" level=info msg="RemoveContainer for \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.854220263Z" level=error msg="ContainerStatus for \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\": not found"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.874941915Z" level=info msg="RemoveContainer for \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\""
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.894864958Z" level=info msg="RemoveContainer for \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\" returns successfully"
	Dec 18 22:43:06 addons-522125 containerd[688]: time="2023-12-18T22:43:06.901077845Z" level=error msg="ContainerStatus for \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\": not found"
	
	* 
	* ==> coredns [3b35022d7f3f37c0fcd745e49d682d7fbd28c452176957b39ce07478c4c33a5a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34345 - 23996 "HINFO IN 2331850616139783503.2456631964299333613. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010100296s
	[INFO] 10.244.0.20:53083 - 26943 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000219011s
	[INFO] 10.244.0.20:51737 - 52275 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00060213s
	[INFO] 10.244.0.20:56739 - 33507 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142655s
	[INFO] 10.244.0.20:53277 - 20132 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073507s
	[INFO] 10.244.0.20:53189 - 52397 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089237s
	[INFO] 10.244.0.20:44737 - 28965 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000204098s
	[INFO] 10.244.0.20:54948 - 12588 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000649521s
	[INFO] 10.244.0.20:37676 - 47561 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000698319s
	[INFO] 10.244.0.23:51252 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000288526s
	[INFO] 10.244.0.23:58604 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188552s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-522125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-522125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=addons-522125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T22_41_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-522125
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-522125"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 22:40:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-522125
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 22:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 22:43:04 +0000   Mon, 18 Dec 2023 22:40:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 22:43:04 +0000   Mon, 18 Dec 2023 22:40:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 22:43:04 +0000   Mon, 18 Dec 2023 22:40:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 22:43:04 +0000   Mon, 18 Dec 2023 22:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    addons-522125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 886a32846a2541fe890a81d218327da9
	  System UUID:                886a3284-6a25-41fe-890a-81d218327da9
	  Boot ID:                    8124e5f6-a58e-41f1-a81b-3a19f3b7085e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-4wtd9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  gcp-auth                    gcp-auth-d4c87556c-mgfm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-cj4g5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-8stxk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     113s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 csi-hostpathplugin-dv9bz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 etcd-addons-522125                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m5s
	  kube-system                 kube-apiserver-addons-522125                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-addons-522125        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-xdqsq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-addons-522125                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 nvidia-device-plugin-daemonset-pgwhq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-58dbcc7b99-866jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 snapshot-controller-58dbcc7b99-qd8p7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 tiller-deploy-7b677967b9-mwq2f               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  local-path-storage          local-path-provisioner-78b46b4d5c-5lrb7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x2 over 2m14s)  kubelet          Node addons-522125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x2 over 2m14s)  kubelet          Node addons-522125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x2 over 2m14s)  kubelet          Node addons-522125 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node addons-522125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node addons-522125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node addons-522125 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m5s                   kubelet          Node addons-522125 status is now: NodeReady
	  Normal  RegisteredNode           114s                   node-controller  Node addons-522125 event: Registered Node addons-522125 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094967] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.432482] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.512263] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152592] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.031638] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.112778] systemd-fstab-generator[556]: Ignoring "noauto" for root device
	[  +0.110671] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.148104] systemd-fstab-generator[580]: Ignoring "noauto" for root device
	[  +0.110665] systemd-fstab-generator[591]: Ignoring "noauto" for root device
	[  +0.234364] systemd-fstab-generator[618]: Ignoring "noauto" for root device
	[  +6.489733] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +6.979227] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[Dec18 22:41] systemd-fstab-generator[1243]: Ignoring "noauto" for root device
	[ +19.505998] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.422405] kauditd_printk_skb: 50 callbacks suppressed
	[ +16.484469] kauditd_printk_skb: 35 callbacks suppressed
	[Dec18 22:42] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.020696] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.876889] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.149507] kauditd_printk_skb: 26 callbacks suppressed
	[Dec18 22:43] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [7ea083d61aa0619aea833be91dad56867efc0c1471c51ace6ccd9abab50fa1ef] <==
	* {"level":"warn","ts":"2023-12-18T22:42:29.444275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.614482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81974"}
	{"level":"info","ts":"2023-12-18T22:42:29.444329Z","caller":"traceutil/trace.go:171","msg":"trace[1044331011] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1013; }","duration":"174.674558ms","start":"2023-12-18T22:42:29.269645Z","end":"2023-12-18T22:42:29.44432Z","steps":["trace[1044331011] 'agreement among raft nodes before linearized reading'  (duration: 174.429143ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:29.444303Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.806318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13782"}
	{"level":"info","ts":"2023-12-18T22:42:29.444416Z","caller":"traceutil/trace.go:171","msg":"trace[113190760] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1013; }","duration":"163.925616ms","start":"2023-12-18T22:42:29.280482Z","end":"2023-12-18T22:42:29.444408Z","steps":["trace[113190760] 'agreement among raft nodes before linearized reading'  (duration: 163.762495ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:36.206154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.841517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81974"}
	{"level":"info","ts":"2023-12-18T22:42:36.2062Z","caller":"traceutil/trace.go:171","msg":"trace[7030314] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1055; }","duration":"436.911706ms","start":"2023-12-18T22:42:35.76928Z","end":"2023-12-18T22:42:36.206191Z","steps":["trace[7030314] 'range keys from in-memory index tree'  (duration: 436.65761ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:36.206232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T22:42:35.769263Z","time spent":"436.95771ms","remote":"127.0.0.1:58826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81997,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2023-12-18T22:42:36.20643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.328429ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T22:42:36.206456Z","caller":"traceutil/trace.go:171","msg":"trace[1200794001] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1055; }","duration":"433.370732ms","start":"2023-12-18T22:42:35.773077Z","end":"2023-12-18T22:42:36.206448Z","steps":["trace[1200794001] 'range keys from in-memory index tree'  (duration: 433.290484ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:36.206624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.286791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13782"}
	{"level":"info","ts":"2023-12-18T22:42:36.206642Z","caller":"traceutil/trace.go:171","msg":"trace[782423444] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1055; }","duration":"425.307894ms","start":"2023-12-18T22:42:35.781329Z","end":"2023-12-18T22:42:36.206637Z","steps":["trace[782423444] 'range keys from in-memory index tree'  (duration: 425.197959ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:36.20666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T22:42:35.781314Z","time spent":"425.34193ms","remote":"127.0.0.1:58826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13805,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2023-12-18T22:42:36.206832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.802337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-18T22:42:36.206866Z","caller":"traceutil/trace.go:171","msg":"trace[1324748316] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1055; }","duration":"331.84356ms","start":"2023-12-18T22:42:35.875013Z","end":"2023-12-18T22:42:36.206857Z","steps":["trace[1324748316] 'count revisions from in-memory index tree'  (duration: 331.734906ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:36.206886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T22:42:35.875001Z","time spent":"331.880596ms","remote":"127.0.0.1:58862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":25,"response size":30,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true "}
	{"level":"warn","ts":"2023-12-18T22:42:36.207063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.870813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10869"}
	{"level":"info","ts":"2023-12-18T22:42:36.207089Z","caller":"traceutil/trace.go:171","msg":"trace[77657739] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1055; }","duration":"107.897689ms","start":"2023-12-18T22:42:36.099187Z","end":"2023-12-18T22:42:36.207084Z","steps":["trace[77657739] 'range keys from in-memory index tree'  (duration: 107.790722ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T22:42:45.477415Z","caller":"traceutil/trace.go:171","msg":"trace[907724640] linearizableReadLoop","detail":"{readStateIndex:1140; appliedIndex:1139; }","duration":"316.211993ms","start":"2023-12-18T22:42:45.161189Z","end":"2023-12-18T22:42:45.477401Z","steps":["trace[907724640] 'read index received'  (duration: 316.027189ms)","trace[907724640] 'applied index is now lower than readState.Index'  (duration: 184.228µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T22:42:45.477509Z","caller":"traceutil/trace.go:171","msg":"trace[291405525] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"420.068035ms","start":"2023-12-18T22:42:45.05742Z","end":"2023-12-18T22:42:45.477488Z","steps":["trace[291405525] 'process raft request'  (duration: 419.810383ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:45.477556Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.364768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T22:42:45.477578Z","caller":"traceutil/trace.go:171","msg":"trace[329241388] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1108; }","duration":"316.40789ms","start":"2023-12-18T22:42:45.161163Z","end":"2023-12-18T22:42:45.477571Z","steps":["trace[329241388] 'agreement among raft nodes before linearized reading'  (duration: 316.322038ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T22:42:45.477629Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T22:42:45.057399Z","time spent":"420.149668ms","remote":"127.0.0.1:58844","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-522125\" mod_revision:1053 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-522125\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-522125\" > >"}
	{"level":"warn","ts":"2023-12-18T22:42:45.47766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T22:42:45.161149Z","time spent":"316.504029ms","remote":"127.0.0.1:58782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-18T22:42:45.478033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.738306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81974"}
	{"level":"info","ts":"2023-12-18T22:42:45.478059Z","caller":"traceutil/trace.go:171","msg":"trace[1401571785] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1108; }","duration":"208.769136ms","start":"2023-12-18T22:42:45.269284Z","end":"2023-12-18T22:42:45.478053Z","steps":["trace[1401571785] 'agreement among raft nodes before linearized reading'  (duration: 208.553716ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [920041786b4f344b37d47ef5d8363f44bb862e61db1f1b788606e244b12e7e1d] <==
	* 2023/12/18 22:42:42 GCP Auth Webhook started!
	2023/12/18 22:42:48 Ready to marshal response ...
	2023/12/18 22:42:48 Ready to write response ...
	2023/12/18 22:42:48 Ready to marshal response ...
	2023/12/18 22:42:48 Ready to write response ...
	2023/12/18 22:42:58 Ready to marshal response ...
	2023/12/18 22:42:58 Ready to write response ...
	2023/12/18 22:43:02 Ready to marshal response ...
	2023/12/18 22:43:02 Ready to write response ...
	2023/12/18 22:43:03 Ready to marshal response ...
	2023/12/18 22:43:03 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:43:08 up 2 min,  0 users,  load average: 2.93, 1.59, 0.63
	Linux addons-522125 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bc1bb7a6acd286e4984072e9f4cf8d32f62bd6e8ba8ac997c0870d29b81d83f0] <==
	* I1218 22:41:25.374416       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.106.255.254"}
	I1218 22:41:25.410580       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.109.205.90"}
	I1218 22:41:25.484560       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W1218 22:41:26.703827       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 22:41:27.800278       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.146.37"}
	I1218 22:41:27.824661       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1218 22:41:28.109979       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.90.201"}
	W1218 22:41:29.418686       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 22:41:30.357853       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.199.42"}
	I1218 22:41:58.784190       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1218 22:42:07.327311       1 handler_proxy.go:93] no RequestInfo found in the context
	E1218 22:42:07.327398       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1218 22:42:07.327922       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1218 22:42:07.328396       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.206.248:443: connect: connection refused
	E1218 22:42:07.329329       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.206.248:443: connect: connection refused
	E1218 22:42:07.334250       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.206.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.206.248:443: connect: connection refused
	I1218 22:42:07.409671       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1218 22:43:00.354705       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1218 22:43:00.365994       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1218 22:43:01.395254       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1218 22:43:03.426568       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1218 22:43:03.653250       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.146.135"}
	
	* 
	* ==> kube-controller-manager [4afe1b38007de3490feb9b29573f6cb34020ba355282306ca97aeb17d87a155f] <==
	* I1218 22:42:32.598084       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1218 22:42:32.616144       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1218 22:42:37.629518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="64.303µs"
	I1218 22:42:38.461604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="11.53256ms"
	I1218 22:42:38.462487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="58.002µs"
	I1218 22:42:43.671012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="16.099229ms"
	I1218 22:42:43.671080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="42.955µs"
	I1218 22:42:48.018175       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1218 22:42:48.043662       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 22:42:48.044543       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 22:42:48.185332       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 22:42:49.778720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="12.76889ms"
	I1218 22:42:49.779024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="83.156µs"
	I1218 22:42:53.516693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="4.066µs"
	I1218 22:42:58.900012       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1218 22:43:01.018253       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1218 22:43:01.058598       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	E1218 22:43:01.398355       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:43:02.012707       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1218 22:43:02.056958       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	W1218 22:43:02.281028       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:43:02.281107       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 22:43:04.756137       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:43:04.756166       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:43:05.734117       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="6.522µs"
	
	* 
	* ==> kube-proxy [cae58447199889b38bd869f292531cf50de98d4503597566def581d1762bd1d9] <==
	* I1218 22:41:15.511375       1 server_others.go:69] "Using iptables proxy"
	I1218 22:41:15.526440       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1218 22:41:15.602722       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 22:41:15.602934       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 22:41:15.607692       1 server_others.go:152] "Using iptables Proxier"
	I1218 22:41:15.607860       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 22:41:15.608007       1 server.go:846] "Version info" version="v1.28.4"
	I1218 22:41:15.608047       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 22:41:15.611686       1 config.go:188] "Starting service config controller"
	I1218 22:41:15.611724       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 22:41:15.611830       1 config.go:97] "Starting endpoint slice config controller"
	I1218 22:41:15.611837       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 22:41:15.616586       1 config.go:315] "Starting node config controller"
	I1218 22:41:15.616688       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 22:41:15.712259       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 22:41:15.712324       1 shared_informer.go:318] Caches are synced for service config
	I1218 22:41:15.716931       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [305a65fdd1886389ada3c7db940da82dba5546315e40f71dd7c5673d4a8969bc] <==
	* W1218 22:40:58.898087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 22:40:58.898136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 22:40:58.898266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 22:40:58.898406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1218 22:40:59.723950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 22:40:59.724056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 22:40:59.853286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 22:40:59.853339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1218 22:40:59.939591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 22:40:59.939957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 22:40:59.963676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 22:40:59.963725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1218 22:41:00.031898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 22:41:00.032284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1218 22:41:00.066323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 22:41:00.066694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 22:41:00.084569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 22:41:00.085021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 22:41:00.102184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 22:41:00.102561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 22:41:00.201682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 22:41:00.202026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 22:41:00.360175       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 22:41:00.360487       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1218 22:41:03.262213       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 22:40:26 UTC, ends at Mon 2023-12-18 22:43:08 UTC. --
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.306576    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8" (UID: "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.307086    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-script" (OuterVolumeSpecName: "script") pod "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8" (UID: "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.313726    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-kube-api-access-8npsx" (OuterVolumeSpecName: "kube-api-access-8npsx") pod "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8" (UID: "2b76cd95-c3b3-4b11-a59f-40aedd42a6e8"). InnerVolumeSpecName "kube-api-access-8npsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.406353    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4khtv\" (UniqueName: \"kubernetes.io/projected/af8d151e-3121-43f3-8722-7b99e5d5a1c5-kube-api-access-4khtv\") pod \"af8d151e-3121-43f3-8722-7b99e5d5a1c5\" (UID: \"af8d151e-3121-43f3-8722-7b99e5d5a1c5\") "
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.406702    1250 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-gcp-creds\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.406724    1250 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-script\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.406841    1250 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-data\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.406855    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8npsx\" (UniqueName: \"kubernetes.io/projected/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8-kube-api-access-8npsx\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.420513    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8d151e-3121-43f3-8722-7b99e5d5a1c5-kube-api-access-4khtv" (OuterVolumeSpecName: "kube-api-access-4khtv") pod "af8d151e-3121-43f3-8722-7b99e5d5a1c5" (UID: "af8d151e-3121-43f3-8722-7b99e5d5a1c5"). InnerVolumeSpecName "kube-api-access-4khtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.507392    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlghk\" (UniqueName: \"kubernetes.io/projected/e6b6af54-7a7a-4493-8ad0-7b28fdde2c62-kube-api-access-nlghk\") pod \"e6b6af54-7a7a-4493-8ad0-7b28fdde2c62\" (UID: \"e6b6af54-7a7a-4493-8ad0-7b28fdde2c62\") "
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.507993    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4khtv\" (UniqueName: \"kubernetes.io/projected/af8d151e-3121-43f3-8722-7b99e5d5a1c5-kube-api-access-4khtv\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.519608    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b6af54-7a7a-4493-8ad0-7b28fdde2c62-kube-api-access-nlghk" (OuterVolumeSpecName: "kube-api-access-nlghk") pod "e6b6af54-7a7a-4493-8ad0-7b28fdde2c62" (UID: "e6b6af54-7a7a-4493-8ad0-7b28fdde2c62"). InnerVolumeSpecName "kube-api-access-nlghk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.608918    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nlghk\" (UniqueName: \"kubernetes.io/projected/e6b6af54-7a7a-4493-8ad0-7b28fdde2c62-kube-api-access-nlghk\") on node \"addons-522125\" DevicePath \"\""
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.823305    1250 scope.go:117] "RemoveContainer" containerID="cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.834684    1250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9aec002a94de3954d24f9309c7e6cdcb2c1d3a346b95c7d5503c2ea0ab3857c"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.851981    1250 scope.go:117] "RemoveContainer" containerID="cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: E1218 22:43:06.855979    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\": not found" containerID="cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.856387    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea"} err="failed to get container status \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc49b78a2a316822af76347acfd57b217b191c3fe1f8d278320148cf8e022aea\": not found"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.856703    1250 scope.go:117] "RemoveContainer" containerID="1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.896954    1250 scope.go:117] "RemoveContainer" containerID="1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: E1218 22:43:06.901356    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\": not found" containerID="1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06"
	Dec 18 22:43:06 addons-522125 kubelet[1250]: I1218 22:43:06.901392    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06"} err="failed to get container status \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c2a7e1e571db1a8626281f66cd5764bc6a14d5a3c30b7e0857ad85bc293da06\": not found"
	Dec 18 22:43:08 addons-522125 kubelet[1250]: I1218 22:43:08.154334    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2b76cd95-c3b3-4b11-a59f-40aedd42a6e8" path="/var/lib/kubelet/pods/2b76cd95-c3b3-4b11-a59f-40aedd42a6e8/volumes"
	Dec 18 22:43:08 addons-522125 kubelet[1250]: I1218 22:43:08.156075    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="af8d151e-3121-43f3-8722-7b99e5d5a1c5" path="/var/lib/kubelet/pods/af8d151e-3121-43f3-8722-7b99e5d5a1c5/volumes"
	Dec 18 22:43:08 addons-522125 kubelet[1250]: I1218 22:43:08.162190    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e6b6af54-7a7a-4493-8ad0-7b28fdde2c62" path="/var/lib/kubelet/pods/e6b6af54-7a7a-4493-8ad0-7b28fdde2c62/volumes"
	
	* 
	* ==> storage-provisioner [9c928c7e3cf5e53b9a52d782895bbe4287099e27aa162d091b3474d978efd6d0] <==
	* I1218 22:41:25.943363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 22:41:26.220468       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 22:41:26.220582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 22:41:26.245591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 22:41:26.248651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96993b11-3dbd-4149-8e57-ac11531807e7", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-522125_91f9dd3e-e293-455f-bb49-b7762c9a9d58 became leader
	I1218 22:41:26.248681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-522125_91f9dd3e-e293-455f-bb49-b7762c9a9d58!
	I1218 22:41:26.348881       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-522125_91f9dd3e-e293-455f-bb49-b7762c9a9d58!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-522125 -n addons-522125
helpers_test.go:261: (dbg) Run:  kubectl --context addons-522125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-j8qx9 ingress-nginx-admission-patch-8tfnc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-522125 describe pod nginx ingress-nginx-admission-create-j8qx9 ingress-nginx-admission-patch-8tfnc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-522125 describe pod nginx ingress-nginx-admission-create-j8qx9 ingress-nginx-admission-patch-8tfnc: exit status 1 (73.470656ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-522125/192.168.39.206
	Start Time:       Mon, 18 Dec 2023 22:43:03 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvsmh (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-vvsmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/nginx to addons-522125
	  Normal  Pulling    5s    kubelet            Pulling image "docker.io/nginx:alpine"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/nginx:alpine" in 4.563s (4.563s including waiting)

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-j8qx9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8tfnc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-522125 describe pod nginx ingress-nginx-admission-create-j8qx9 ingress-nginx-admission-patch-8tfnc: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.12s)
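For reference, the post-mortem above can be replayed by hand using the same kubectl queries the helper ran (helpers_test.go:261 and helpers_test.go:277). A minimal sketch in Go, assuming kubectl is on PATH and the addons-522125 kubeconfig context still exists; this wrapper is illustrative only and is not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List pods that are not in the Running phase, mirroring helpers_test.go:261 above.
	out, err := exec.Command("kubectl", "--context", "addons-522125",
		"get", "po", "-A", "-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	fmt.Printf("non-running pods: %s (err: %v)\n", out, err)

	// Describe the pending nginx pod, mirroring helpers_test.go:277 above.
	out, err = exec.Command("kubectl", "--context", "addons-522125",
		"describe", "pod", "nginx").CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}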

                                                
                                    

Test pass (273/313)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 67.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 51.16
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 49.28
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.58
27 TestOffline 103.61
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 155.13
34 TestAddons/parallel/Registry 18.07
35 TestAddons/parallel/Ingress 23.84
36 TestAddons/parallel/InspektorGadget 12.02
37 TestAddons/parallel/MetricsServer 5.83
38 TestAddons/parallel/HelmTiller 15.46
40 TestAddons/parallel/CSI 98.61
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 15.29
44 TestAddons/parallel/NvidiaDevicePlugin 5.52
47 TestAddons/serial/GCPAuth/Namespaces 0.12
48 TestAddons/StoppedEnableDisable 92.09
49 TestCertOptions 61.1
50 TestCertExpiration 300.06
52 TestForceSystemdFlag 104
53 TestForceSystemdEnv 56.32
55 TestKVMDriverInstallOrUpdate 8.33
59 TestErrorSpam/setup 47.37
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.54
63 TestErrorSpam/unpause 1.71
64 TestErrorSpam/stop 1.56
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 99.4
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.04
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
76 TestFunctional/serial/CacheCmd/cache/add_local 2.9
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 43.39
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.52
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 3.58
90 TestFunctional/parallel/ConfigCmd 0.39
91 TestFunctional/parallel/DashboardCmd 15.28
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.77
98 TestFunctional/parallel/ServiceCmdConnect 22.56
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 50.24
102 TestFunctional/parallel/SSHCmd 0.5
103 TestFunctional/parallel/CpCmd 1.46
104 TestFunctional/parallel/MySQL 28.78
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.32
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.84
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.69
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.36
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.73
122 TestFunctional/parallel/ImageCommands/Setup 2.66
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
136 TestFunctional/parallel/ProfileCmd/profile_list 0.3
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.71
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.1
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.37
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.69
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.26
145 TestFunctional/parallel/ServiceCmd/DeployApp 7.41
146 TestFunctional/parallel/MountCmd/any-port 9.54
147 TestFunctional/parallel/ServiceCmd/List 1.25
148 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
150 TestFunctional/parallel/ServiceCmd/Format 0.32
151 TestFunctional/parallel/MountCmd/specific-port 2.01
152 TestFunctional/parallel/ServiceCmd/URL 0.32
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestIngressAddonLegacy/StartLegacyK8sCluster 92.67
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.93
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 49.51
167 TestJSONOutput/start/Command 105.57
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.65
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.61
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.1
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 102.05
199 TestMountStart/serial/StartWithMountFirst 30.39
200 TestMountStart/serial/VerifyMountFirst 0.38
201 TestMountStart/serial/StartWithMountSecond 29.2
202 TestMountStart/serial/VerifyMountSecond 0.39
203 TestMountStart/serial/DeleteFirst 0.9
204 TestMountStart/serial/VerifyMountPostDelete 0.39
205 TestMountStart/serial/Stop 1.17
206 TestMountStart/serial/RestartStopped 22.88
207 TestMountStart/serial/VerifyMountPostStop 0.39
210 TestMultiNode/serial/FreshStart2Nodes 180.39
211 TestMultiNode/serial/DeployApp2Nodes 6.56
212 TestMultiNode/serial/PingHostFrom2Pods 0.92
213 TestMultiNode/serial/AddNode 43.38
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.2
216 TestMultiNode/serial/CopyFile 7.61
217 TestMultiNode/serial/StopNode 2.14
218 TestMultiNode/serial/StartAfterStop 28.1
219 TestMultiNode/serial/RestartKeepsNodes 310.04
220 TestMultiNode/serial/DeleteNode 1.71
221 TestMultiNode/serial/StopMultiNode 183.23
222 TestMultiNode/serial/RestartMultiNode 92.38
223 TestMultiNode/serial/ValidateNameConflict 50.06
228 TestPreload 445.65
230 TestScheduledStopUnix 121.88
234 TestRunningBinaryUpgrade 242.05
236 TestKubernetesUpgrade 215.02
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/StartWithK8s 106.67
241 TestNoKubernetes/serial/StartWithStopK8s 74.69
242 TestNoKubernetes/serial/Start 40.98
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
244 TestNoKubernetes/serial/ProfileList 16.04
252 TestNetworkPlugins/group/false 3.31
256 TestNoKubernetes/serial/Stop 1.35
257 TestNoKubernetes/serial/StartNoArgs 49.33
259 TestPause/serial/Start 67.38
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
268 TestStoppedBinaryUpgrade/Setup 3.09
269 TestStoppedBinaryUpgrade/Upgrade 196.13
270 TestPause/serial/SecondStartNoReconfiguration 34.91
271 TestPause/serial/Pause 0.97
272 TestPause/serial/VerifyStatus 0.28
273 TestPause/serial/Unpause 0.91
274 TestPause/serial/PauseAgain 0.89
275 TestPause/serial/DeletePaused 1.5
276 TestPause/serial/VerifyDeletedResources 0.55
277 TestNetworkPlugins/group/auto/Start 106.77
278 TestNetworkPlugins/group/kindnet/Start 93.43
279 TestNetworkPlugins/group/auto/KubeletFlags 0.25
280 TestNetworkPlugins/group/auto/NetCatPod 13.32
281 TestNetworkPlugins/group/calico/Start 102.97
282 TestNetworkPlugins/group/auto/DNS 0.24
283 TestNetworkPlugins/group/auto/Localhost 0.19
284 TestNetworkPlugins/group/auto/HairPin 0.2
285 TestNetworkPlugins/group/custom-flannel/Start 90.19
286 TestStoppedBinaryUpgrade/MinikubeLogs 3.84
287 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
288 TestNetworkPlugins/group/kindnet/KubeletFlags 1.17
289 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
290 TestNetworkPlugins/group/enable-default-cni/Start 120.96
291 TestNetworkPlugins/group/kindnet/DNS 0.19
292 TestNetworkPlugins/group/kindnet/Localhost 0.2
293 TestNetworkPlugins/group/kindnet/HairPin 0.2
294 TestNetworkPlugins/group/flannel/Start 103.1
295 TestNetworkPlugins/group/calico/ControllerPod 6.01
296 TestNetworkPlugins/group/calico/KubeletFlags 0.24
297 TestNetworkPlugins/group/calico/NetCatPod 10.26
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
299 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
300 TestNetworkPlugins/group/calico/DNS 0.24
301 TestNetworkPlugins/group/calico/Localhost 0.21
302 TestNetworkPlugins/group/calico/HairPin 0.24
303 TestNetworkPlugins/group/custom-flannel/DNS 0.25
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
306 TestNetworkPlugins/group/bridge/Start 103.85
308 TestStartStop/group/old-k8s-version/serial/FirstStart 153.65
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.32
311 TestNetworkPlugins/group/flannel/ControllerPod 6.01
312 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
313 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
314 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
316 TestNetworkPlugins/group/flannel/NetCatPod 9.27
317 TestNetworkPlugins/group/flannel/DNS 0.22
318 TestNetworkPlugins/group/flannel/Localhost 0.16
319 TestNetworkPlugins/group/flannel/HairPin 0.19
321 TestStartStop/group/no-preload/serial/FirstStart 150.08
323 TestStartStop/group/embed-certs/serial/FirstStart 118.84
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
325 TestNetworkPlugins/group/bridge/NetCatPod 10.25
326 TestNetworkPlugins/group/bridge/DNS 0.18
327 TestNetworkPlugins/group/bridge/Localhost 0.15
328 TestNetworkPlugins/group/bridge/HairPin 0.16
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 104.18
331 TestStartStop/group/old-k8s-version/serial/DeployApp 15.46
332 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
333 TestStartStop/group/old-k8s-version/serial/Stop 92.52
334 TestStartStop/group/embed-certs/serial/DeployApp 11.3
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
336 TestStartStop/group/embed-certs/serial/Stop 92.34
337 TestStartStop/group/no-preload/serial/DeployApp 10.27
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
339 TestStartStop/group/no-preload/serial/Stop 92.32
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.79
343 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
344 TestStartStop/group/old-k8s-version/serial/SecondStart 388.7
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
346 TestStartStop/group/embed-certs/serial/SecondStart 309.45
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/no-preload/serial/SecondStart 313.86
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 365.89
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/embed-certs/serial/Pause 2.84
356 TestStartStop/group/newest-cni/serial/FirstStart 67.03
357 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
359 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
360 TestStartStop/group/no-preload/serial/Pause 2.64
361 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
364 TestStartStop/group/old-k8s-version/serial/Pause 2.87
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.54
367 TestStartStop/group/newest-cni/serial/Stop 2.11
368 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
369 TestStartStop/group/newest-cni/serial/SecondStart 43.72
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
377 TestStartStop/group/newest-cni/serial/Pause 2.43
TestDownloadOnly/v1.16.0/json-events (67.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m7.851151073s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (67.85s)
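For reference, the download-only invocation above can be reproduced outside the Go test harness. The following is a minimal sketch that shells out to the same binary with the same flags shown in the log; it assumes out/minikube-linux-amd64 has already been built in the working directory and that the profile name is free to reuse.

	// download_only_sketch.go: re-runs the command the test drives, via os/exec.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-o=json", "--download-only",
			"-p", "download-only-134172",
			"--force", "--alsologtostderr",
			"--kubernetes-version=v1.16.0",
			"--container-runtime=containerd",
			"--driver=kvm2")
		cmd.Stdout = os.Stdout // stream progress to the console, as the test does
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("download-only start failed: %v", err)
		}
	}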

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-134172
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-134172: exit status 85 (73.135069ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |          |
	|         | -p download-only-134172        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:37:23
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:37:23.200720   13620 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:37:23.200973   13620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:23.200982   13620 out.go:309] Setting ErrFile to fd 2...
	I1218 22:37:23.200987   13620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:23.201155   13620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	W1218 22:37:23.201253   13620 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: no such file or directory
	I1218 22:37:23.201782   13620 out.go:303] Setting JSON to true
	I1218 22:37:23.202546   13620 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1189,"bootTime":1702937854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:37:23.202601   13620 start.go:138] virtualization: kvm guest
	I1218 22:37:23.205152   13620 out.go:97] [download-only-134172] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 22:37:23.206751   13620 out.go:169] MINIKUBE_LOCATION=17822
	W1218 22:37:23.205245   13620 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 22:37:23.205290   13620 notify.go:220] Checking for updates...
	I1218 22:37:23.209686   13620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:37:23.211213   13620 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:37:23.212793   13620 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:37:23.214297   13620 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 22:37:23.217185   13620 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 22:37:23.217423   13620 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:37:23.320442   13620 out.go:97] Using the kvm2 driver based on user configuration
	I1218 22:37:23.320474   13620 start.go:298] selected driver: kvm2
	I1218 22:37:23.320482   13620 start.go:902] validating driver "kvm2" against <nil>
	I1218 22:37:23.320795   13620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:37:23.320900   13620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17822-6323/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 22:37:23.334866   13620 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 22:37:23.334937   13620 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 22:37:23.335427   13620 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1218 22:37:23.335567   13620 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 22:37:23.335619   13620 cni.go:84] Creating CNI manager for ""
	I1218 22:37:23.335631   13620 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:37:23.335640   13620 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 22:37:23.335648   13620 start_flags.go:323] config:
	{Name:download-only-134172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-134172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:37:23.335838   13620 iso.go:125] acquiring lock: {Name:mk45271b640590b559d12c4c43666d7b9d627a43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:37:23.337556   13620 out.go:97] Downloading VM boot image ...
	I1218 22:37:23.337600   13620 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1218 22:37:34.471929   13620 out.go:97] Starting control plane node download-only-134172 in cluster download-only-134172
	I1218 22:37:34.471952   13620 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 22:37:34.633639   13620 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1218 22:37:34.633669   13620 cache.go:56] Caching tarball of preloaded images
	I1218 22:37:34.633813   13620 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 22:37:34.635982   13620 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 22:37:34.635998   13620 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:37:34.800997   13620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1218 22:37:58.248980   13620 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:37:58.249069   13620 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:37:59.157641   13620 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1218 22:37:59.158025   13620 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/download-only-134172/config.json ...
	I1218 22:37:59.158060   13620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/download-only-134172/config.json: {Name:mk53eb4e8c66c1e21ac901d89c8dbefaa38d2c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:59.158245   13620 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1218 22:37:59.158455   13620 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-134172"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
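The non-zero exit here is expected: with --download-only no control plane node is ever created, so `minikube logs` reports "The control plane node "" does not exist." and exits with status 85, and the test only records that failure. A caller that wants to branch on the exit status can recover it from os/exec, as in this rough sketch (binary path and profile name copied from the log above).

	// exit_code_sketch.go: captures the exit status of a minikube invocation.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-134172")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 85 in this run: the profile has no control plane node.
			fmt.Printf("minikube logs exited with status %d\n", exitErr.ExitCode())
		} else if err != nil {
			log.Fatalf("could not run minikube: %v", err)
		}
		fmt.Println(string(out))
	}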

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (51.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (51.164516455s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (51.16s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-134172
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-134172: exit status 85 (73.541052ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |          |
	|         | -p download-only-134172        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:38 UTC |          |
	|         | -p download-only-134172        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:38:31
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:38:31.127696   13814 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:38:31.127833   13814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:38:31.127842   13814 out.go:309] Setting ErrFile to fd 2...
	I1218 22:38:31.127847   13814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:38:31.128047   13814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	W1218 22:38:31.128173   13814 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: no such file or directory
	I1218 22:38:31.128624   13814 out.go:303] Setting JSON to true
	I1218 22:38:31.129446   13814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1257,"bootTime":1702937854,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:38:31.129501   13814 start.go:138] virtualization: kvm guest
	I1218 22:38:31.132109   13814 out.go:97] [download-only-134172] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 22:38:31.134018   13814 out.go:169] MINIKUBE_LOCATION=17822
	I1218 22:38:31.132338   13814 notify.go:220] Checking for updates...
	I1218 22:38:31.137296   13814 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:38:31.138879   13814 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:38:31.140562   13814 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:38:31.142054   13814 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 22:38:31.144900   13814 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 22:38:31.145350   13814 config.go:182] Loaded profile config "download-only-134172": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1218 22:38:31.145390   13814 start.go:810] api.Load failed for download-only-134172: filestore "download-only-134172": Docker machine "download-only-134172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:38:31.145457   13814 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 22:38:31.145484   13814 start.go:810] api.Load failed for download-only-134172: filestore "download-only-134172": Docker machine "download-only-134172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:38:31.176907   13814 out.go:97] Using the kvm2 driver based on existing profile
	I1218 22:38:31.176928   13814 start.go:298] selected driver: kvm2
	I1218 22:38:31.176933   13814 start.go:902] validating driver "kvm2" against &{Name:download-only-134172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-134172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:38:31.177278   13814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:38:31.177345   13814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17822-6323/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 22:38:31.191128   13814 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 22:38:31.191777   13814 cni.go:84] Creating CNI manager for ""
	I1218 22:38:31.191793   13814 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:38:31.191806   13814 start_flags.go:323] config:
	{Name:download-only-134172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-134172 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:38:31.191918   13814 iso.go:125] acquiring lock: {Name:mk45271b640590b559d12c4c43666d7b9d627a43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:38:31.193691   13814 out.go:97] Starting control plane node download-only-134172 in cluster download-only-134172
	I1218 22:38:31.193709   13814 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 22:38:31.849367   13814 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1218 22:38:31.849403   13814 cache.go:56] Caching tarball of preloaded images
	I1218 22:38:31.849548   13814 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 22:38:31.851600   13814 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 22:38:31.851630   13814 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:38:32.016028   13814 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I1218 22:38:49.460514   13814 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:38:49.460620   13814 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:38:50.397968   13814 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I1218 22:38:50.398087   13814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/download-only-134172/config.json ...
	I1218 22:38:50.398297   13814 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I1218 22:38:50.398452   13814 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-134172"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (49.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-134172 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (49.281700658s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (49.28s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-134172
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-134172: exit status 85 (73.485476ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |          |
	|         | -p download-only-134172           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:38 UTC |          |
	|         | -p download-only-134172           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-134172 | jenkins | v1.32.0 | 18 Dec 23 22:39 UTC |          |
	|         | -p download-only-134172           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:39:22
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:39:22.369073   13960 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:39:22.369355   13960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:39:22.369365   13960 out.go:309] Setting ErrFile to fd 2...
	I1218 22:39:22.369369   13960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:39:22.369598   13960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	W1218 22:39:22.369756   13960 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-6323/.minikube/config/config.json: no such file or directory
	I1218 22:39:22.370199   13960 out.go:303] Setting JSON to true
	I1218 22:39:22.371004   13960 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1309,"bootTime":1702937854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:39:22.371063   13960 start.go:138] virtualization: kvm guest
	I1218 22:39:22.373394   13960 out.go:97] [download-only-134172] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 22:39:22.375111   13960 out.go:169] MINIKUBE_LOCATION=17822
	I1218 22:39:22.373569   13960 notify.go:220] Checking for updates...
	I1218 22:39:22.378207   13960 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:39:22.379653   13960 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:39:22.381041   13960 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:39:22.382401   13960 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 22:39:22.385086   13960 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 22:39:22.385552   13960 config.go:182] Loaded profile config "download-only-134172": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	W1218 22:39:22.385589   13960 start.go:810] api.Load failed for download-only-134172: filestore "download-only-134172": Docker machine "download-only-134172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:39:22.385660   13960 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 22:39:22.385689   13960 start.go:810] api.Load failed for download-only-134172: filestore "download-only-134172": Docker machine "download-only-134172" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:39:22.417064   13960 out.go:97] Using the kvm2 driver based on existing profile
	I1218 22:39:22.417092   13960 start.go:298] selected driver: kvm2
	I1218 22:39:22.417098   13960 start.go:902] validating driver "kvm2" against &{Name:download-only-134172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-134172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:39:22.417450   13960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:39:22.417516   13960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17822-6323/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 22:39:22.430905   13960 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 22:39:22.431682   13960 cni.go:84] Creating CNI manager for ""
	I1218 22:39:22.431699   13960 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1218 22:39:22.431710   13960 start_flags.go:323] config:
	{Name:download-only-134172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-134172 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:39:22.431879   13960 iso.go:125] acquiring lock: {Name:mk45271b640590b559d12c4c43666d7b9d627a43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:39:22.433694   13960 out.go:97] Starting control plane node download-only-134172 in cluster download-only-134172
	I1218 22:39:22.433717   13960 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1218 22:39:23.082835   13960 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1218 22:39:23.082868   13960 cache.go:56] Caching tarball of preloaded images
	I1218 22:39:23.083036   13960 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1218 22:39:23.085133   13960 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1218 22:39:23.085158   13960 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:39:23.258937   13960 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:24c8d97965ae2515db31ece6a310bbf9 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I1218 22:39:37.732841   13960 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:39:37.733621   13960 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-6323/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I1218 22:39:38.546405   13960 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I1218 22:39:38.546535   13960 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/download-only-134172/config.json ...
	I1218 22:39:38.546734   13960 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I1218 22:39:38.546925   13960 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17822-6323/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-134172"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-134172
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-383810 --alsologtostderr --binary-mirror http://127.0.0.1:35955 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-383810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-383810
--- PASS: TestBinaryMirror (0.58s)
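TestBinaryMirror points --binary-mirror at a local HTTP endpoint instead of the default binary download location (the logs above show kubectl being fetched from dl.k8s.io). A stand-in for such a mirror is just a static file server; the sketch below serves a local ./mirror directory on the address used in this run. The ./mirror layout matching the paths minikube requests is an assumption, not something this log shows.

	// binary_mirror_sketch.go: a plain static file server that could back the
	// --binary-mirror flag. The ./mirror directory layout is assumed here.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("serving ./mirror on 127.0.0.1:35955")
		if err := http.ListenAndServe("127.0.0.1:35955", fs); err != nil {
			log.Fatal(err)
		}
	}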

                                                
                                    
TestOffline (103.61s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-319909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-319909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m42.594941928s)
helpers_test.go:175: Cleaning up "offline-containerd-319909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-319909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-319909: (1.012637113s)
--- PASS: TestOffline (103.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-522125
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-522125: exit status 85 (64.537456ms)

                                                
                                                
-- stdout --
	* Profile "addons-522125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522125"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-522125
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-522125: exit status 85 (63.067688ms)

                                                
                                                
-- stdout --
	* Profile "addons-522125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522125"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (155.13s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-522125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-522125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.131103799s)
--- PASS: TestAddons/Setup (155.13s)

                                                
                                    
TestAddons/parallel/Registry (18.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 23.712238ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-mdc25" [af8d151e-3121-43f3-8722-7b99e5d5a1c5] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005327961s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h4tds" [e6b6af54-7a7a-4493-8ad0-7b28fdde2c62] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005959336s
addons_test.go:339: (dbg) Run:  kubectl --context addons-522125 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-522125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-522125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.037115847s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.07s)
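The registry check above boils down to an HTTP request against the registry service's cluster DNS name, issued from a throwaway busybox pod because that name only resolves inside the cluster. A rough Go equivalent of the wget --spider probe, which would likewise have to run in a pod, looks like this:

	// registry_probe_sketch.go: HEAD request against the in-cluster registry
	// service. Must run inside the cluster for the DNS name to resolve.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatalf("registry not reachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}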

                                                
                                    
TestAddons/parallel/Ingress (23.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-522125 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-522125 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-522125 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [69ccf736-c717-4c0c-8c51-5aba4e2a6ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [69ccf736-c717-4c0c-8c51-5aba4e2a6ce2] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004465704s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-522125 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.206
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-522125 addons disable ingress-dns --alsologtostderr -v=1: (1.551481211s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-522125 addons disable ingress --alsologtostderr -v=1: (7.971183526s)
--- PASS: TestAddons/parallel/Ingress (23.84s)
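The ingress verification above is two commands: a curl against the node with the test Host header, and an nslookup through ingress-dns. A rough Go equivalent, reusing the profile name and node IP recorded in this run (treat both as placeholders elsewhere):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and prints its combined output; errors are reported
// but not fatal so both checks always execute.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// Hit the nginx Ingress from inside the VM using the hostname from the test manifest.
	run("out/minikube-linux-amd64", "-p", "addons-522125", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	// Resolve the ingress-dns example record against the node IP reported above.
	run("nslookup", "hello-john.test", "192.168.39.206")
}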

                                                
                                    
TestAddons/parallel/InspektorGadget (12.02s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9l58c" [3911ed10-54e3-459f-b7d3-9cd41e0094a6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004198459s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-522125
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-522125: (6.018112905s)
--- PASS: TestAddons/parallel/InspektorGadget (12.02s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 23.742279ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-98lx4" [e8b44252-6b91-46f8-aa31-02c97367111f] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00729261s
addons_test.go:414: (dbg) Run:  kubectl --context addons-522125 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
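The metrics-server assertion reduces to kubectl top succeeding once the deployment is serving the Metrics API. A small Go sketch of that check (context name taken from this run; an assumption anywhere else):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// When metrics-server is healthy this returns usage rows; before that,
	// kubectl exits non-zero because the Metrics API is not yet available.
	out, err := exec.Command("kubectl", "--context", "addons-522125",
		"top", "pods", "-n", "kube-system").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("metrics API not ready: %v", err)
	}
}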

                                                
                                    
TestAddons/parallel/HelmTiller (15.46s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 17.933132ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mwq2f" [cb75981a-563b-4f1f-88bf-5848ade2b8ff] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005850614s
addons_test.go:472: (dbg) Run:  kubectl --context addons-522125 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-522125 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.812800792s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.46s)
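The helm-tiller check simply runs helm version from a short-lived in-cluster client pod so it can reach tiller-deploy. A sketch of the same call, reusing the image tag and namespace shown in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// A throwaway helm 2.x client pod in kube-system; `version` only succeeds
	// if it can talk to the tiller-deploy service.
	cmd := exec.Command("kubectl", "--context", "addons-522125",
		"run", "--rm", "helm-test", "--restart=Never",
		"--image=docker.io/alpine/helm:2.16.3", "-it",
		"--namespace=kube-system", "--", "version")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("helm version failed: %v", err)
	}
}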

                                                
                                    
TestAddons/parallel/CSI (98.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 24.476644ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-522125 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/12/18 22:43:05 [DEBUG] GET http://192.168.39.206:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-522125 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b00515f2-a1b5-41d3-9b72-e4654e22b09f] Pending
helpers_test.go:344: "task-pv-pod" [b00515f2-a1b5-41d3-9b72-e4654e22b09f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b00515f2-a1b5-41d3-9b72-e4654e22b09f] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005244964s
addons_test.go:583: (dbg) Run:  kubectl --context addons-522125 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-522125 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-522125 delete pod task-pv-pod: (1.140378739s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-522125 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-522125 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-522125 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c8e5a8e9-fd03-44bd-88d0-29060eaf43e4] Pending
helpers_test.go:344: "task-pv-pod-restore" [c8e5a8e9-fd03-44bd-88d0-29060eaf43e4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c8e5a8e9-fd03-44bd-88d0-29060eaf43e4] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.005107419s
addons_test.go:625: (dbg) Run:  kubectl --context addons-522125 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-522125 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-522125 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-522125 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.738376099s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (98.61s)
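The long run of get pvc ... jsonpath={.status.phase} lines above is the harness polling the claim until it reaches the expected phase. Below is a minimal Go version of that kind of wait loop; the target phase ("Bound") and poll interval are assumptions here, while the claim name, namespace, and timeout are copied from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls .status.phase of a PersistentVolumeClaim until it
// matches want or the timeout expires, much like the repeated kubectl calls above.
func waitForPVCPhase(ctx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach %s within %s", ns, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-522125", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hpvc reached the expected phase")
}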

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-4wtd9" [12aa5cf5-1683-4157-ac05-618c70ef81dc] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006870254s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-522125
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (15.29s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-522125 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-522125 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [61f4de79-397b-4f18-8415-cc711903dabd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [61f4de79-397b-4f18-8415-cc711903dabd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [61f4de79-397b-4f18-8415-cc711903dabd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005506543s
addons_test.go:890: (dbg) Run:  kubectl --context addons-522125 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 ssh "cat /opt/local-path-provisioner/pvc-5df8efa3-be39-4112-b71b-2a19c24d1a6e_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-522125 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-522125 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-522125 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.29s)
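The decisive step in the local-path check is reading back the file the pod wrote, straight from the provisioner's host directory over ssh. A sketch of that read; note the pvc-… directory name embeds the claim UID and is specific to this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// This exact path only exists for the run recorded above; a fresh run
	// generates a new PVC UID and therefore a new directory name.
	const file = "/opt/local-path-provisioner/pvc-5df8efa3-be39-4112-b71b-2a19c24d1a6e_default_test-pvc/file1"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-522125",
		"ssh", "cat "+file).CombinedOutput()
	if err != nil {
		log.Fatalf("reading provisioned file failed: %v\n%s", err, out)
	}
	fmt.Printf("file1 contents: %s\n", out)
}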

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pgwhq" [b1d07de4-f656-4c4d-9bf0-e7c14b3514a4] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005364435s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-522125
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-522125 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-522125 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.09s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-522125
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-522125: (1m31.796524136s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-522125
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-522125
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-522125
--- PASS: TestAddons/StoppedEnableDisable (92.09s)

                                                
                                    
TestCertOptions (61.1s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-637340 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-637340 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (59.623426459s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-637340 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-637340 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-637340 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-637340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-637340
--- PASS: TestCertOptions (61.10s)
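TestCertOptions passes extra --apiserver-ips/--apiserver-names at start and then verifies they landed in the serving certificate. A compact Go sketch of that verification, grepping the SANs out of the openssl dump (profile name and expected values copied from the run above; the real assertion in cert_options_test.go may be stricter):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the VM, exactly as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-637340",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatalf("reading apiserver.crt failed: %v", err)
	}
	// The custom IPs/names passed at start time should appear among the SANs.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("SAN %q missing from apiserver certificate", want)
		}
		fmt.Println("found SAN:", want)
	}
}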

                                                
                                    
TestCertExpiration (300.06s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-177378 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-177378 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m21.995288677s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-177378 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-177378 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (36.520340397s)
helpers_test.go:175: Cleaning up "cert-expiration-177378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-177378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-177378: (1.548319345s)
--- PASS: TestCertExpiration (300.06s)

                                                
                                    
TestForceSystemdFlag (104s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-334282 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-334282 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m42.897977152s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-334282 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-334282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-334282
--- PASS: TestForceSystemdFlag (104.00s)
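Both --force-systemd tests end by reading the generated containerd config out of the VM. With containerd the setting of interest is the SystemdCgroup toggle in the runc runtime options; the sketch below checks for it, though the exact assertion in docker_test.go may differ:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Pull the containerd config out of the VM, as the test does after start-up.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-334282",
		"ssh", "cat /etc/containerd/config.toml").Output()
	if err != nil {
		log.Fatalf("reading containerd config failed: %v", err)
	}
	// With --force-systemd the runc options should enable the systemd cgroup driver.
	if strings.Contains(string(out), "SystemdCgroup = true") {
		fmt.Println("containerd is using the systemd cgroup driver")
	} else {
		log.Fatal("SystemdCgroup = true not found in config.toml")
	}
}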

                                                
                                    
TestForceSystemdEnv (56.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-415562 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-415562 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (54.547777406s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-415562 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-415562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-415562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-415562: (1.548125329s)
--- PASS: TestForceSystemdEnv (56.32s)

                                                
                                    
TestKVMDriverInstallOrUpdate (8.33s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.33s)

                                                
                                    
TestErrorSpam/setup (47.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-843732 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-843732 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-843732 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-843732 --driver=kvm2  --container-runtime=containerd: (47.367191318s)
--- PASS: TestErrorSpam/setup (47.37s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
TestErrorSpam/stop (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 stop: (1.394512449s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-843732 --log_dir /tmp/nospam-843732 stop
--- PASS: TestErrorSpam/stop (1.56s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17822-6323/.minikube/files/etc/test/nested/copy/13608/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.4s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1218 22:47:47.848365   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:47.854085   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:47.864339   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:47.884606   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:47.924874   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:48.005206   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:48.165477   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:48.486080   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:49.127010   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:50.407513   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:52.968421   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:47:58.089367   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:48:08.329635   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:48:28.809808   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-220634 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.397410695s)
--- PASS: TestFunctional/serial/StartWithProxy (99.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.04s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-220634 --alsologtostderr -v=8: (6.035117777s)
functional_test.go:659: soft start took 6.035808891s for "functional-220634" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.04s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-220634 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:3.1: (1.394192347s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:3.3: (1.35705085s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 cache add registry.k8s.io/pause:latest: (1.291860363s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-220634 /tmp/TestFunctionalserialCacheCmdcacheadd_local1262588616/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache add minikube-local-cache-test:functional-220634
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 cache add minikube-local-cache-test:functional-220634: (2.573614539s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache delete minikube-local-cache-test:functional-220634
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-220634
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.979847ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 cache reload: (1.16226801s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
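The cache_reload sequence above is: remove the image inside the node, confirm crictl inspecti now fails, run cache reload, confirm the image is back. A small Go sketch of that round trip against the same profile (assumed to still be running):

package main

import (
	"fmt"
	"os/exec"
)

// ssh runs a command inside the minikube node as root, like the test's ssh steps.
func ssh(profile, cmd string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo "+cmd).CombinedOutput()
}

func main() {
	const profile = "functional-220634"
	const img = "registry.k8s.io/pause:latest"

	// 1. Remove the image from the node's containerd store.
	ssh(profile, "crictl rmi "+img)

	// 2. inspecti should now fail because the image is gone.
	if _, err := ssh(profile, "crictl inspecti "+img); err != nil {
		fmt.Println("image removed as expected")
	}

	// 3. Reload minikube's local cache back into the node ...
	exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload").Run()

	// 4. ... and inspecti should succeed again.
	if _, err := ssh(profile, "crictl inspecti "+img); err == nil {
		fmt.Println("image restored from cache")
	}
}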

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 kubectl -- --context functional-220634 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-220634 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 22:49:09.770070   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-220634 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.386340919s)
functional_test.go:757: restart took 43.386521343s for "functional-220634" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.39s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-220634 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
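ComponentHealth walks the control-plane pods and reports each one's phase and Ready condition, as the lines above show. A compact Go sketch of the same walk over kubectl's JSON output (context name from this run; the fields used are standard Pod status fields):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList captures just the fields this check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-220634",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}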

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 logs: (1.518621978s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 logs --file /tmp/TestFunctionalserialLogsFileCmd775599222/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 logs --file /tmp/TestFunctionalserialLogsFileCmd775599222/001/logs.txt: (1.461580907s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (3.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-220634 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-220634
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-220634: exit status 115 (291.741859ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.254:30909 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-220634 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.58s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 config get cpus: exit status 14 (58.295771ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 config get cpus: exit status 14 (54.762764ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
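The ConfigCmd block exercises the set/get/unset round trip, where get on an unset key exits with status 14 as captured above. A minimal Go sketch of the same round trip (profile name from this run):

package main

import (
	"fmt"
	"os/exec"
)

// mk wraps `minikube -p functional-220634 config <args...>`.
func mk(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-220634", "config"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// get on an unset key fails (exit status 14 in the log above).
	if _, err := mk("get", "cpus"); err != nil {
		fmt.Println("cpus not set, as expected:", err)
	}
	mk("set", "cpus", "2")
	if v, err := mk("get", "cpus"); err == nil {
		fmt.Println("cpus =", v)
	}
	mk("unset", "cpus")
}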

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-220634 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-220634 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21335: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-220634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (143.057281ms)

                                                
                                                
-- stdout --
	* [functional-220634] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 22:50:27.246052   21233 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:50:27.246314   21233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:50:27.246323   21233 out.go:309] Setting ErrFile to fd 2...
	I1218 22:50:27.246328   21233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:50:27.246489   21233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 22:50:27.246999   21233 out.go:303] Setting JSON to false
	I1218 22:50:27.247823   21233 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1974,"bootTime":1702937854,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:50:27.247879   21233 start.go:138] virtualization: kvm guest
	I1218 22:50:27.250064   21233 out.go:177] * [functional-220634] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 22:50:27.251821   21233 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:50:27.251825   21233 notify.go:220] Checking for updates...
	I1218 22:50:27.253494   21233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:50:27.255019   21233 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:50:27.256962   21233 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:50:27.258606   21233 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 22:50:27.259983   21233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:50:27.261769   21233 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:50:27.262266   21233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:50:27.262309   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:50:27.277733   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I1218 22:50:27.278131   21233 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:50:27.278675   21233 main.go:141] libmachine: Using API Version  1
	I1218 22:50:27.278697   21233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:50:27.279026   21233 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:50:27.279234   21233 main.go:141] libmachine: (functional-220634) Calling .DriverName
	I1218 22:50:27.279497   21233 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:50:27.279762   21233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:50:27.279793   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:50:27.293649   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I1218 22:50:27.294020   21233 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:50:27.294425   21233 main.go:141] libmachine: Using API Version  1
	I1218 22:50:27.294445   21233 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:50:27.294763   21233 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:50:27.294945   21233 main.go:141] libmachine: (functional-220634) Calling .DriverName
	I1218 22:50:27.325933   21233 out.go:177] * Using the kvm2 driver based on existing profile
	I1218 22:50:27.327318   21233 start.go:298] selected driver: kvm2
	I1218 22:50:27.327334   21233 start.go:902] validating driver "kvm2" against &{Name:functional-220634 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-220634 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:50:27.327420   21233 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:50:27.329451   21233 out.go:177] 
	W1218 22:50:27.330836   21233 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 22:50:27.332184   21233 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
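The dry-run start with --memory 250MB is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 in this run) because the request is below the 1800MB usable minimum reported above. The following sketch reproduces the same negative check outside the harness; the profile name "demo" is an assumption, and the exit code and threshold are simply the values seen in this log, not guaranteed constants.

// dryrun_memcheck.go - a --dry-run start with too little memory should fail fast.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo",
		"--dry-run", "--memory", "250MB", "--driver=kvm2",
		"--container-runtime=containerd")
	out, err := cmd.CombinedOutput()

	exitCode := 0
	if ee, ok := err.(*exec.ExitError); ok {
		exitCode = ee.ExitCode()
	}
	fmt.Printf("exit=%d\n%s", exitCode, out)

	if exitCode == 0 || !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("expected the memory validation to reject 250MB")
	}
}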

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-220634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-220634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (162.713397ms)

                                                
                                                
-- stdout --
	* [functional-220634] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 22:50:22.925795   20932 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:50:22.925949   20932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:50:22.925959   20932 out.go:309] Setting ErrFile to fd 2...
	I1218 22:50:22.925966   20932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:50:22.926285   20932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 22:50:22.926838   20932 out.go:303] Setting JSON to false
	I1218 22:50:22.927673   20932 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1969,"bootTime":1702937854,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 22:50:22.927737   20932 start.go:138] virtualization: kvm guest
	I1218 22:50:22.929999   20932 out.go:177] * [functional-220634] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1218 22:50:22.931601   20932 notify.go:220] Checking for updates...
	I1218 22:50:22.931614   20932 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:50:22.933187   20932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:50:22.935020   20932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 22:50:22.936680   20932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 22:50:22.938346   20932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 22:50:22.939963   20932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:50:22.941810   20932 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 22:50:22.942197   20932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:50:22.942242   20932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:50:22.956182   20932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I1218 22:50:22.956665   20932 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:50:22.957364   20932 main.go:141] libmachine: Using API Version  1
	I1218 22:50:22.957400   20932 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:50:22.957720   20932 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:50:22.957859   20932 main.go:141] libmachine: (functional-220634) Calling .DriverName
	I1218 22:50:22.958063   20932 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:50:22.958346   20932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 22:50:22.958383   20932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 22:50:22.979514   20932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
	I1218 22:50:22.979918   20932 main.go:141] libmachine: () Calling .GetVersion
	I1218 22:50:22.980401   20932 main.go:141] libmachine: Using API Version  1
	I1218 22:50:22.980424   20932 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 22:50:22.980729   20932 main.go:141] libmachine: () Calling .GetMachineName
	I1218 22:50:22.981037   20932 main.go:141] libmachine: (functional-220634) Calling .DriverName
	I1218 22:50:23.023222   20932 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1218 22:50:23.024928   20932 start.go:298] selected driver: kvm2
	I1218 22:50:23.024947   20932 start.go:902] validating driver "kvm2" against &{Name:functional-220634 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-220634 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:50:23.025078   20932 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:50:23.027699   20932 out.go:177] 
	W1218 22:50:23.029199   20932 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 22:50:23.030884   20932 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
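Here the same under-provisioned dry-run is repeated with a French locale and the output is localized: "Utilisation du pilote kvm2 basé sur le profil existant" is the French rendering of "Using the kvm2 driver based on existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY message again states that the requested 250MiB is below the 1800MB usable minimum. Which locale variable the harness actually sets is not visible in this excerpt; the sketch below assumes the standard LC_ALL variable and a hypothetical profile name.

// i18n_check.go - rerun the dry-run with the process locale pointed at French.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo",
		"--dry-run", "--memory", "250MB", "--driver=kvm2",
		"--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale selected via LC_ALL
	out, _ := cmd.CombinedOutput()

	// The run above printed "Utilisation du pilote kvm2 basé sur le profil existant".
	if strings.Contains(string(out), "Utilisation du pilote") {
		fmt.Println("localized (French) output detected")
	} else {
		fmt.Println("output was not localized; check the locale variables")
	}
}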

                                                
                                    
TestFunctional/parallel/StatusCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-220634 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-220634 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-kcz2l" [387026c3-7d62-4edf-871e-9a38095ec0e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-kcz2l" [387026c3-7d62-4edf-871e-9a38095ec0e8] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.021518777s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.254:30912
functional_test.go:1674: http://192.168.50.254:30912: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-kcz2l

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.254:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.254:30912
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.56s)
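The flow above is: create the echoserver deployment, expose it as a NodePort service, resolve the node URL with "minikube service ... --url", and issue a plain HTTP GET against it. Below is a minimal sketch of the same round trip, assuming the hello-node-connect deployment and service already exist and using a hypothetical profile name "demo".

// svc_connect.go - resolve the NodePort URL and GET it, as the test does.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "demo",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.50.254:30912

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}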

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8530799f-1c1e-4fa3-a4cc-be8a6dd90273] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00906089s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-220634 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-220634 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-220634 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-220634 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-220634 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [599583a9-5f11-46ba-a0ca-4d2af1bfdee8] Pending
helpers_test.go:344: "sp-pod" [599583a9-5f11-46ba-a0ca-4d2af1bfdee8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [599583a9-5f11-46ba-a0ca-4d2af1bfdee8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003944344s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-220634 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-220634 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-220634 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef9dc714-122c-4cdf-9c89-01fb4a84f544] Pending
helpers_test.go:344: "sp-pod" [ef9dc714-122c-4cdf-9c89-01fb4a84f544] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef9dc714-122c-4cdf-9c89-01fb4a84f544] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004773642s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-220634 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.24s)
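The test writes a file under the PVC mount, deletes the pod, recreates it from the same manifest, and lists the mount again to confirm the data survived the pod's lifecycle. A sketch of that sequence with plain kubectl follows; the manifest contents are not shown in this report, the paths mirror the ones the harness uses, and "kubectl wait" stands in for the harness's own pod-readiness polling.

// pvc_persistence.go - data under the PVC mount should survive pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and the file) should survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo"
}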

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh -n functional-220634 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cp functional-220634:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1938550735/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh -n functional-220634 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh -n functional-220634 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

                                                
                                    
TestFunctional/parallel/MySQL (28.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-220634 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-r9rrj" [345640fa-8959-40fa-8de2-fff987a6f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-r9rrj" [345640fa-8959-40fa-8de2-fff987a6f3c1] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.007880096s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;": exit status 1 (259.511859ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;": exit status 1 (401.163123ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;": exit status 1 (228.068948ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-220634 exec mysql-859648c796-r9rrj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.78s)
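The repeated non-zero exits above are expected: the pod reports Running before mysqld has finished initializing, so early queries fail with "Access denied" or "Can't connect ... mysqld.sock" until one finally succeeds, which is why the harness retries. Below is a sketch of the same retry loop; the pod name and password are simply the ones from this run and should be treated as examples.

// mysql_retry.go - retry the query until mysqld has finished initializing.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"exec", "mysql-859648c796-r9rrj", "--",
		"mysql", "-ppassword", "-e", "show databases;"}

	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(5 * time.Second) // give mysqld time to come up
	}
	fmt.Println("mysql never became ready")
}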

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/13608/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /etc/test/nested/copy/13608/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/13608.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /etc/ssl/certs/13608.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/13608.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /usr/share/ca-certificates/13608.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/136082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /etc/ssl/certs/136082.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/136082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /usr/share/ca-certificates/136082.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-220634 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "sudo systemctl is-active docker": exit status 1 (236.185585ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "sudo systemctl is-active crio": exit status 1 (219.993613ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
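On this containerd cluster, docker and crio must be disabled, so "systemctl is-active" prints "inactive" and exits non-zero (status 3, surfaced above as "ssh: Process exited with status 3"); the assertion is on the stdout text, not on a zero exit code. A sketch of the same check over "minikube ssh", using a hypothetical profile name, follows.

// runtime_check.go - on a containerd cluster, docker and crio should be inactive.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// is-active exits non-zero for inactive units, so ignore the error
		// and look at the reported state instead.
		out, _ := exec.Command("minikube", "-p", "demo", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if strings.Contains(state, "inactive") {
			fmt.Printf("%s: inactive (expected on a containerd runtime)\n", unit)
		} else {
			fmt.Printf("%s: unexpected state %q\n", unit, state)
		}
	}
}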

                                                
                                    
TestFunctional/parallel/License (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-220634 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-220634
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-220634
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-220634 image ls --format short --alsologtostderr:
I1218 22:50:34.318603   21880 out.go:296] Setting OutFile to fd 1 ...
I1218 22:50:34.318758   21880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:34.318778   21880 out.go:309] Setting ErrFile to fd 2...
I1218 22:50:34.318792   21880 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:34.319105   21880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
I1218 22:50:34.319747   21880 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:34.319878   21880 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:34.320278   21880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:34.320351   21880 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:34.334888   21880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
I1218 22:50:34.335405   21880 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:34.335991   21880 main.go:141] libmachine: Using API Version  1
I1218 22:50:34.336018   21880 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:34.336412   21880 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:34.336625   21880 main.go:141] libmachine: (functional-220634) Calling .GetState
I1218 22:50:34.338563   21880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:34.338611   21880 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:34.355160   21880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
I1218 22:50:34.355507   21880 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:34.356042   21880 main.go:141] libmachine: Using API Version  1
I1218 22:50:34.356060   21880 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:34.356337   21880 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:34.356515   21880 main.go:141] libmachine: (functional-220634) Calling .DriverName
I1218 22:50:34.356801   21880 ssh_runner.go:195] Run: systemctl --version
I1218 22:50:34.356834   21880 main.go:141] libmachine: (functional-220634) Calling .GetSSHHostname
I1218 22:50:34.359974   21880 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:34.360259   21880 main.go:141] libmachine: (functional-220634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:ac:11", ip: ""} in network mk-functional-220634: {Iface:virbr1 ExpiryTime:2023-12-18 23:47:28 +0000 UTC Type:0 Mac:52:54:00:75:ac:11 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:functional-220634 Clientid:01:52:54:00:75:ac:11}
I1218 22:50:34.360320   21880 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined IP address 192.168.50.254 and MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:34.360579   21880 main.go:141] libmachine: (functional-220634) Calling .GetSSHPort
I1218 22:50:34.360758   21880 main.go:141] libmachine: (functional-220634) Calling .GetSSHKeyPath
I1218 22:50:34.360902   21880 main.go:141] libmachine: (functional-220634) Calling .GetSSHUsername
I1218 22:50:34.361056   21880 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/functional-220634/id_rsa Username:docker}
I1218 22:50:34.452754   21880 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 22:50:34.509906   21880 main.go:141] libmachine: Making call to close driver server
I1218 22:50:34.509918   21880 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:34.510294   21880 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:34.510315   21880 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:34.510309   21880 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
I1218 22:50:34.510328   21880 main.go:141] libmachine: Making call to close driver server
I1218 22:50:34.510342   21880 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:34.510564   21880 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:34.510580   21880 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:34.510640   21880 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
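As the stderr shows, "image ls" is backed by "sudo crictl images --output json" on the node, and the short listing above is essentially the repoTags of each image. The same information is available in structured form via "image ls --format json" (shown in full under ImageListJson further below), which the following sketch parses; the profile name "demo" is an assumption.

// imagels_json.go - print the repoTags from the structured image listing,
// which is what the "--format short" view shows.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}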

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-220634 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/nginx                     | latest             | sha256:a6bd71 | 70.5MB |
| gcr.io/google-containers/addon-resizer      | functional-220634  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-220634  | sha256:a30cce | 1.01kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-220634 image ls --format table --alsologtostderr:
I1218 22:50:36.375361   22217 out.go:296] Setting OutFile to fd 1 ...
I1218 22:50:36.375473   22217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:36.375481   22217 out.go:309] Setting ErrFile to fd 2...
I1218 22:50:36.375486   22217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:36.375659   22217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
I1218 22:50:36.376179   22217 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:36.376274   22217 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:36.376720   22217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:36.376765   22217 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:36.390498   22217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
I1218 22:50:36.390903   22217 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:36.391427   22217 main.go:141] libmachine: Using API Version  1
I1218 22:50:36.391443   22217 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:36.391829   22217 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:36.392044   22217 main.go:141] libmachine: (functional-220634) Calling .GetState
I1218 22:50:36.393943   22217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:36.393986   22217 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:36.407335   22217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
I1218 22:50:36.407705   22217 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:36.408179   22217 main.go:141] libmachine: Using API Version  1
I1218 22:50:36.408207   22217 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:36.408537   22217 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:36.408726   22217 main.go:141] libmachine: (functional-220634) Calling .DriverName
I1218 22:50:36.408914   22217 ssh_runner.go:195] Run: systemctl --version
I1218 22:50:36.408937   22217 main.go:141] libmachine: (functional-220634) Calling .GetSSHHostname
I1218 22:50:36.411338   22217 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:36.411722   22217 main.go:141] libmachine: (functional-220634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:ac:11", ip: ""} in network mk-functional-220634: {Iface:virbr1 ExpiryTime:2023-12-18 23:47:28 +0000 UTC Type:0 Mac:52:54:00:75:ac:11 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:functional-220634 Clientid:01:52:54:00:75:ac:11}
I1218 22:50:36.411747   22217 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined IP address 192.168.50.254 and MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:36.411871   22217 main.go:141] libmachine: (functional-220634) Calling .GetSSHPort
I1218 22:50:36.412037   22217 main.go:141] libmachine: (functional-220634) Calling .GetSSHKeyPath
I1218 22:50:36.412173   22217 main.go:141] libmachine: (functional-220634) Calling .GetSSHUsername
I1218 22:50:36.412322   22217 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/functional-220634/id_rsa Username:docker}
I1218 22:50:36.499815   22217 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 22:50:36.565757   22217 main.go:141] libmachine: Making call to close driver server
I1218 22:50:36.565778   22217 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:36.566190   22217 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
I1218 22:50:36.566199   22217 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:36.566227   22217 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:36.566246   22217 main.go:141] libmachine: Making call to close driver server
I1218 22:50:36.566260   22217 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:36.566528   22217 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:36.566545   22217 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:36.566703   22217 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-220634 image ls --format json --alsologtostderr:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"rep
oTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.
k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-220634"],"size":"10823156"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kin
dnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"70544635"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDige
sts":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:a30cce5a45991a4dc147399175117613e40aa4758df4fb807c8115d99bb2b7d0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-220634"],"size":"1007"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-220634 image ls --format json --alsologtostderr:
I1218 22:50:36.142625   22193 out.go:296] Setting OutFile to fd 1 ...
I1218 22:50:36.142864   22193 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:36.142873   22193 out.go:309] Setting ErrFile to fd 2...
I1218 22:50:36.142878   22193 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:36.143103   22193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
I1218 22:50:36.143668   22193 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:36.143765   22193 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:36.144168   22193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:36.144222   22193 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:36.158602   22193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
I1218 22:50:36.159008   22193 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:36.159654   22193 main.go:141] libmachine: Using API Version  1
I1218 22:50:36.159682   22193 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:36.160074   22193 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:36.160276   22193 main.go:141] libmachine: (functional-220634) Calling .GetState
I1218 22:50:36.162102   22193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:36.162136   22193 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:36.176063   22193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
I1218 22:50:36.176448   22193 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:36.176899   22193 main.go:141] libmachine: Using API Version  1
I1218 22:50:36.176924   22193 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:36.177270   22193 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:36.177475   22193 main.go:141] libmachine: (functional-220634) Calling .DriverName
I1218 22:50:36.177667   22193 ssh_runner.go:195] Run: systemctl --version
I1218 22:50:36.177697   22193 main.go:141] libmachine: (functional-220634) Calling .GetSSHHostname
I1218 22:50:36.180519   22193 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:36.180923   22193 main.go:141] libmachine: (functional-220634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:ac:11", ip: ""} in network mk-functional-220634: {Iface:virbr1 ExpiryTime:2023-12-18 23:47:28 +0000 UTC Type:0 Mac:52:54:00:75:ac:11 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:functional-220634 Clientid:01:52:54:00:75:ac:11}
I1218 22:50:36.180961   22193 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined IP address 192.168.50.254 and MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:36.181054   22193 main.go:141] libmachine: (functional-220634) Calling .GetSSHPort
I1218 22:50:36.181234   22193 main.go:141] libmachine: (functional-220634) Calling .GetSSHKeyPath
I1218 22:50:36.181390   22193 main.go:141] libmachine: (functional-220634) Calling .GetSSHUsername
I1218 22:50:36.181499   22193 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/functional-220634/id_rsa Username:docker}
I1218 22:50:36.270913   22193 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 22:50:36.317573   22193 main.go:141] libmachine: Making call to close driver server
I1218 22:50:36.317589   22193 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:36.317850   22193 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:36.317870   22193 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:36.317882   22193 main.go:141] libmachine: Making call to close driver server
I1218 22:50:36.317893   22193 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:36.318076   22193 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:36.318090   22193 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:36.318112   22193 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
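For reference outside the harness: the `image ls --format json` output captured above is a flat JSON array of records with id, repoDigests, repoTags and size fields. Below is a minimal Go sketch of decoding it, assuming the same profile name and relative binary path as in the log; the struct and field mapping are inferred from the stdout above, not taken from minikube's source.

// A minimal sketch, not part of the test suite: decode the JSON image list emitted above.
// The profile name and binary path are copied from the log; everything else is an assumption.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the objects in the stdout above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-220634",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}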

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-220634 image ls --format yaml --alsologtostderr:
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:a30cce5a45991a4dc147399175117613e40aa4758df4fb807c8115d99bb2b7d0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-220634
size: "1007"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-220634
size: "10823156"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
repoTags:
- docker.io/library/nginx:latest
size: "70544635"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-220634 image ls --format yaml --alsologtostderr:
I1218 22:50:34.567814   21922 out.go:296] Setting OutFile to fd 1 ...
I1218 22:50:34.568104   21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:34.568117   21922 out.go:309] Setting ErrFile to fd 2...
I1218 22:50:34.568124   21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:34.568431   21922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
I1218 22:50:34.569083   21922 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:34.569187   21922 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:34.569642   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:34.569699   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:34.584992   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
I1218 22:50:34.585414   21922 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:34.586031   21922 main.go:141] libmachine: Using API Version  1
I1218 22:50:34.586057   21922 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:34.586365   21922 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:34.586554   21922 main.go:141] libmachine: (functional-220634) Calling .GetState
I1218 22:50:34.588414   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:34.588451   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:34.610595   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
I1218 22:50:34.611286   21922 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:34.612369   21922 main.go:141] libmachine: Using API Version  1
I1218 22:50:34.612389   21922 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:34.614230   21922 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:34.615084   21922 main.go:141] libmachine: (functional-220634) Calling .DriverName
I1218 22:50:34.615343   21922 ssh_runner.go:195] Run: systemctl --version
I1218 22:50:34.615372   21922 main.go:141] libmachine: (functional-220634) Calling .GetSSHHostname
I1218 22:50:34.619260   21922 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:34.619889   21922 main.go:141] libmachine: (functional-220634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:ac:11", ip: ""} in network mk-functional-220634: {Iface:virbr1 ExpiryTime:2023-12-18 23:47:28 +0000 UTC Type:0 Mac:52:54:00:75:ac:11 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:functional-220634 Clientid:01:52:54:00:75:ac:11}
I1218 22:50:34.619927   21922 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined IP address 192.168.50.254 and MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:34.620138   21922 main.go:141] libmachine: (functional-220634) Calling .GetSSHPort
I1218 22:50:34.620369   21922 main.go:141] libmachine: (functional-220634) Calling .GetSSHKeyPath
I1218 22:50:34.620583   21922 main.go:141] libmachine: (functional-220634) Calling .GetSSHUsername
I1218 22:50:34.620730   21922 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/functional-220634/id_rsa Username:docker}
I1218 22:50:34.738285   21922 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 22:50:34.871178   21922 main.go:141] libmachine: Making call to close driver server
I1218 22:50:34.871195   21922 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:34.871453   21922 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:34.871473   21922 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:34.871472   21922 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
I1218 22:50:34.871484   21922 main.go:141] libmachine: Making call to close driver server
I1218 22:50:34.871494   21922 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:34.871721   21922 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:34.871745   21922 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:34.871849   21922 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh pgrep buildkitd: exit status 1 (298.178493ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image build -t localhost/my-image:functional-220634 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image build -t localhost/my-image:functional-220634 testdata/build --alsologtostderr: (5.195416615s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-220634 image build -t localhost/my-image:functional-220634 testdata/build --alsologtostderr:
I1218 22:50:35.241893   22070 out.go:296] Setting OutFile to fd 1 ...
I1218 22:50:35.242049   22070 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:35.242079   22070 out.go:309] Setting ErrFile to fd 2...
I1218 22:50:35.242085   22070 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:50:35.242317   22070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
I1218 22:50:35.242977   22070 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:35.243533   22070 config.go:182] Loaded profile config "functional-220634": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I1218 22:50:35.244077   22070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:35.244121   22070 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:35.263862   22070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
I1218 22:50:35.264327   22070 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:35.264924   22070 main.go:141] libmachine: Using API Version  1
I1218 22:50:35.264939   22070 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:35.265327   22070 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:35.265536   22070 main.go:141] libmachine: (functional-220634) Calling .GetState
I1218 22:50:35.267390   22070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1218 22:50:35.267445   22070 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 22:50:35.281308   22070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
I1218 22:50:35.281690   22070 main.go:141] libmachine: () Calling .GetVersion
I1218 22:50:35.282098   22070 main.go:141] libmachine: Using API Version  1
I1218 22:50:35.282124   22070 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 22:50:35.282513   22070 main.go:141] libmachine: () Calling .GetMachineName
I1218 22:50:35.282673   22070 main.go:141] libmachine: (functional-220634) Calling .DriverName
I1218 22:50:35.282912   22070 ssh_runner.go:195] Run: systemctl --version
I1218 22:50:35.282940   22070 main.go:141] libmachine: (functional-220634) Calling .GetSSHHostname
I1218 22:50:35.285556   22070 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:35.285947   22070 main.go:141] libmachine: (functional-220634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:ac:11", ip: ""} in network mk-functional-220634: {Iface:virbr1 ExpiryTime:2023-12-18 23:47:28 +0000 UTC Type:0 Mac:52:54:00:75:ac:11 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:functional-220634 Clientid:01:52:54:00:75:ac:11}
I1218 22:50:35.285982   22070 main.go:141] libmachine: (functional-220634) DBG | domain functional-220634 has defined IP address 192.168.50.254 and MAC address 52:54:00:75:ac:11 in network mk-functional-220634
I1218 22:50:35.286094   22070 main.go:141] libmachine: (functional-220634) Calling .GetSSHPort
I1218 22:50:35.286256   22070 main.go:141] libmachine: (functional-220634) Calling .GetSSHKeyPath
I1218 22:50:35.286419   22070 main.go:141] libmachine: (functional-220634) Calling .GetSSHUsername
I1218 22:50:35.286526   22070 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/functional-220634/id_rsa Username:docker}
I1218 22:50:35.383198   22070 build_images.go:151] Building image from path: /tmp/build.1377906756.tar
I1218 22:50:35.383258   22070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 22:50:35.406333   22070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1377906756.tar
I1218 22:50:35.412905   22070 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1377906756.tar: stat -c "%s %y" /var/lib/minikube/build/build.1377906756.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1377906756.tar': No such file or directory
I1218 22:50:35.412944   22070 ssh_runner.go:362] scp /tmp/build.1377906756.tar --> /var/lib/minikube/build/build.1377906756.tar (3072 bytes)
I1218 22:50:35.446393   22070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1377906756
I1218 22:50:35.457922   22070 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1377906756 -xf /var/lib/minikube/build/build.1377906756.tar
I1218 22:50:35.479520   22070 containerd.go:378] Building image: /var/lib/minikube/build/build.1377906756
I1218 22:50:35.479594   22070 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1377906756 --local dockerfile=/var/lib/minikube/build/build.1377906756 --output type=image,name=localhost/my-image:functional-220634
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3ab0765b57cb7e3cde2d2167b7a1274c0fc358274cd31705099ee3ebafd3d298 0.0s done
#8 exporting config sha256:b43ce746f3b44dc9336f941ee6ff3e1ba346b033364fcce0ec111862ca98db63 0.0s done
#8 naming to localhost/my-image:functional-220634
#8 naming to localhost/my-image:functional-220634 done
#8 DONE 0.2s
I1218 22:50:40.340496   22070 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1377906756 --local dockerfile=/var/lib/minikube/build/build.1377906756 --output type=image,name=localhost/my-image:functional-220634: (4.860875015s)
I1218 22:50:40.340557   22070 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1377906756
I1218 22:50:40.354951   22070 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1377906756.tar
I1218 22:50:40.366156   22070 build_images.go:207] Built localhost/my-image:functional-220634 from /tmp/build.1377906756.tar
I1218 22:50:40.366187   22070 build_images.go:123] succeeded building to: functional-220634
I1218 22:50:40.366191   22070 build_images.go:124] failed building to: 
I1218 22:50:40.366208   22070 main.go:141] libmachine: Making call to close driver server
I1218 22:50:40.366218   22070 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:40.366501   22070 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
I1218 22:50:40.366537   22070 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:40.366554   22070 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 22:50:40.366573   22070 main.go:141] libmachine: Making call to close driver server
I1218 22:50:40.366583   22070 main.go:141] libmachine: (functional-220634) Calling .Close
I1218 22:50:40.366870   22070 main.go:141] libmachine: (functional-220634) DBG | Closing plugin on server side
I1218 22:50:40.366907   22070 main.go:141] libmachine: Successfully made call to close driver server
I1218 22:50:40.366923   22070 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
2023/12/18 22:50:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.73s)
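The buildkit stages above (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) imply that testdata/build holds a three-step Dockerfile plus a content.txt file. Below is a hedged Go sketch that drives the same `image build` invocation against a temporary context; the Dockerfile text is a reconstruction from the logged stages, not the actual testdata.

// A minimal sketch, not part of the test suite. The Dockerfile below is reconstructed
// from the buildkit stages logged above; the real testdata/build may differ.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		panic(err)
	}
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("example\n"), 0o644); err != nil {
		panic(err)
	}

	// Same CLI invocation exercised by the test, pointed at the temporary build context.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-220634",
		"image", "build", "-t", "localhost/my-image:functional-220634", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}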

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.644031183s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-220634
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "236.949655ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "58.28055ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr: (4.444190157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "266.767946ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "71.074172ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr: (2.553559749s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.96606011s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-220634
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image load --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr: (5.858028625s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image save gcr.io/google-containers/addon-resizer:functional-220634 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image save gcr.io/google-containers/addon-resizer:functional-220634 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.368330241s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image rm gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.405181181s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-220634
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 image save --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 image save --daemon gcr.io/google-containers/addon-resizer:functional-220634 --alsologtostderr: (2.218817655s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-220634
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-220634 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-220634 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-s5gb7" [ec5bbb5f-a699-40de-b112-3ae1f5fdbf2c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-s5gb7" [ec5bbb5f-a699-40de-b112-3ae1f5fdbf2c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.208995966s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.41s)
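The wait above polls for pods labelled app=hello-node until the deployment reports healthy. As a point of comparison, here is a hedged client-go sketch of the same readiness check, written independently of the test helpers; the kubeconfig path, current context and default namespace are assumptions.

// A minimal sketch using client-go rather than the test's helpers: list pods labelled
// app=hello-node in the default namespace and report whether each one is Running.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig already points at the functional-220634 cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=hello-node"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%s running=%v\n", p.Name, running)
	}
}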

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdany-port1630818667/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702939823037162242" to /tmp/TestFunctionalparallelMountCmdany-port1630818667/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702939823037162242" to /tmp/TestFunctionalparallelMountCmdany-port1630818667/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702939823037162242" to /tmp/TestFunctionalparallelMountCmdany-port1630818667/001/test-1702939823037162242
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.463187ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 22:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 22:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 22:50 test-1702939823037162242
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh cat /mount-9p/test-1702939823037162242
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-220634 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [07cb50e0-950f-4674-998d-3957e8950074] Pending
helpers_test.go:344: "busybox-mount" [07cb50e0-950f-4674-998d-3957e8950074] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [07cb50e0-950f-4674-998d-3957e8950074] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [07cb50e0-950f-4674-998d-3957e8950074] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004089524s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-220634 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh stat /mount-9p/created-by-test
E1218 22:50:31.691221   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdany-port1630818667/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.54s)
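The 9p mount makes the host directory visible at /mount-9p inside the guest, and the test verifies it by writing files on the host and reading them back over `minikube ssh`. Below is a hedged Go sketch of that round trip, assuming a mount like the one above is still running; the host path is copied from the log and the file name is illustrative.

// A minimal sketch, assuming a `minikube mount <hostdir>:/mount-9p` like the one above is
// already running: write a file on the host side and read it back through the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Host path from the log above; adjust to the directory passed to `minikube mount`.
	hostDir := "/tmp/TestFunctionalparallelMountCmdany-port1630818667/001"
	name := "created-by-example" // illustrative file name, not from the test
	if err := os.WriteFile(filepath.Join(hostDir, name), []byte("hello from the host\n"), 0o644); err != nil {
		panic(err)
	}
	// Same style of check the test performs: read the file from the guest side of the mount.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-220634",
		"ssh", "cat /mount-9p/"+name).Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}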

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 service list: (1.250174644s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-220634 service list -o json: (1.298115372s)
functional_test.go:1493: Took "1.298250254s" to run "out/minikube-linux-amd64 -p functional-220634 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.254:31868
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdspecific-port2180981096/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (226.284041ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdspecific-port2180981096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "sudo umount -f /mount-9p": exit status 1 (258.177134ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-220634 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdspecific-port2180981096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.254:31868
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
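`service hello-node --url` resolved the NodePort endpoint http://192.168.50.254:31868 in this run. Below is a hedged Go sketch that re-resolves the URL and probes the echoserver behind it; the profile and service names come from the log, and treating the command output as a single line is a simplification.

// A minimal sketch, not part of the test suite: resolve the hello-node URL the same way
// the test does and send one HTTP request to it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-220634",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	// Simplification: assume the command prints exactly one URL on stdout.
	url := strings.TrimSpace(string(out))

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url) // the echoserver answers on the NodePort shown above
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}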

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T" /mount1: exit status 1 (344.112884ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-220634 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-220634 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-220634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1527856534/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-220634
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-220634
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-220634
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (92.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-875577 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-875577 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m32.668086649s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.67s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons enable ingress --alsologtostderr -v=5: (15.934046167s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.93s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (49.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-875577 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1218 22:52:47.848418   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-875577 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.75481874s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-875577 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-875577 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c5827339-b47c-41fa-9fa2-5b375143114c] Pending
helpers_test.go:344: "nginx" [c5827339-b47c-41fa-9fa2-5b375143114c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c5827339-b47c-41fa-9fa2-5b375143114c] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.003719161s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-875577 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.117
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons disable ingress-dns --alsologtostderr -v=1
E1218 22:53:15.531931   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons disable ingress-dns --alsologtostderr -v=1: (11.012401708s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-875577 addons disable ingress --alsologtostderr -v=1: (7.520151432s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.51s)
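
The ingress validation above is a plain CLI sequence: wait for the ingress-nginx controller pod to become ready, apply the test Ingress and the backing nginx pod/service, then curl the node with the Host header the Ingress routes on. Below is a minimal Go sketch of the same flow, shelling out to the binaries used in the log; the profile name and manifest paths are copied from the commands above and stand in for any equivalent setup, and error handling is reduced to printing.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its combined output, printing any error.
    func run(name string, args ...string) string {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            fmt.Printf("%s %v failed: %v\n", name, args, err)
        }
        return string(out)
    }

    func main() {
        profile := "ingress-addon-legacy-875577" // placeholder: any profile with the ingress addon enabled

        // Wait for the ingress-nginx controller, as addons_test.go:206 does.
        run("kubectl", "--context", profile, "wait", "--for=condition=ready",
            "--namespace=ingress-nginx", "pod",
            "--selector=app.kubernetes.io/component=controller", "--timeout=90s")

        // Apply the example Ingress and backing nginx pod/service (paths as in the log).
        run("kubectl", "--context", profile, "replace", "--force", "-f", "testdata/nginx-ingress-v1beta1.yaml")
        run("kubectl", "--context", profile, "replace", "--force", "-f", "testdata/nginx-pod-svc.yaml")

        // Request the site through the node, routed by the Host header.
        fmt.Println(run("out/minikube-linux-amd64", "-p", profile, "ssh",
            "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"))
    }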

                                                
                                    
TestJSONOutput/start/Command (105.57s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-254285 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1218 22:54:57.848448   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:57.853714   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:57.863948   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:57.884189   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:57.924448   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:58.004727   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:58.165159   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:58.485391   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:54:59.126288   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:55:00.406777   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:55:02.968522   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:55:08.089358   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-254285 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m45.567350179s)
--- PASS: TestJSONOutput/start/Command (105.57s)
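
With --output=json, minikube start emits one CloudEvents-style JSON object per line instead of free-form progress text. The sketch below consumes that stream; the field names (type, data.currentstep, data.message) are taken from the events printed under TestErrorJSONOutput further down in this report, not from a published schema, and the start flags mirror the command above. Because each line is an independent JSON object, the stream can be processed as it arrives without buffering the whole run.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // event mirrors the per-line JSON objects seen in this report; all data values are strings.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-254285",
            "--output=json", "--user=testUser", "--memory=2200", "--wait=true",
            "--driver=kvm2", "--container-runtime=containerd")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // skip anything that is not a JSON event line
            }
            fmt.Printf("%-40s step=%s %s\n", ev.Type, ev.Data["currentstep"], ev.Data["message"])
        }
        _ = cmd.Wait()
    }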

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-254285 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-254285 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-254285 --output=json --user=testUser
E1218 22:55:18.330411   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-254285 --output=json --user=testUser: (7.102166236s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-576735 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-576735 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.203955ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fab16c81-323b-4c9e-85cc-8232d3b4a37d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-576735] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9da84e8c-98c2-49c2-b929-6f2bca54bcae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"37b78b85-1905-4049-af83-1df36c2a01ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94a461fa-cb7a-4b78-a06c-6400994b24fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig"}}
	{"specversion":"1.0","id":"32bd233f-562c-446d-b43a-e9738a92fbcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube"}}
	{"specversion":"1.0","id":"4247ec6c-ce34-434d-a876-0108e27cfa1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"58557326-a675-4473-9313-502854399302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fc16278-d3b6-4b96-a605-531817534f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-576735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-576735
--- PASS: TestErrorJSONOutput (0.22s)
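
On a failed start the stream ends with a single io.k8s.sigs.minikube.error event carrying the reason and exit code, as in the stdout block above (DRV_UNSUPPORTED_OS, exit code 56). A small sketch of decoding that final event; the struct fields mirror the logged payload only and are an assumption about the wider schema.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // errorEvent models the io.k8s.sigs.minikube.error line shown in the stdout block above.
    type errorEvent struct {
        Type string `json:"type"`
        Data struct {
            ExitCode string `json:"exitcode"`
            Name     string `json:"name"`
            Message  string `json:"message"`
        } `json:"data"`
    }

    func main() {
        // Last event of the stdout block above, abbreviated to the fields used here.
        line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS","message":"The driver 'fail' is not supported on linux/amd64"}}`
        var ev errorEvent
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("start failed: %s (%s), exit code %s\n", ev.Data.Message, ev.Data.Name, ev.Data.ExitCode)
        }
    }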

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (102.05s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-110038 --driver=kvm2  --container-runtime=containerd
E1218 22:55:38.811051   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-110038 --driver=kvm2  --container-runtime=containerd: (48.589322823s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-113043 --driver=kvm2  --container-runtime=containerd
E1218 22:56:19.772669   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-113043 --driver=kvm2  --container-runtime=containerd: (50.811180375s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-110038
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-113043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-113043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-113043
helpers_test.go:175: Cleaning up "first-110038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-110038
--- PASS: TestMinikubeProfile (102.05s)
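
TestMinikubeProfile is two independent clusters plus `minikube profile <name>` to switch the active profile and `profile list -ojson` for a machine-readable listing. A rough sketch of the same sequence; the profile names are the ones from the log, and the JSON listing is printed raw rather than parsed because its schema is not shown in this report.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs the minikube binary from this report and returns its combined output.
    func mk(args ...string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out)
    }

    func main() {
        for _, p := range []string{"first-110038", "second-113043"} {
            mk("start", "-p", p, "--driver=kvm2", "--container-runtime=containerd")
        }
        mk("profile", "first-110038")                // make first-110038 the active profile
        fmt.Println(mk("profile", "list", "-ojson")) // raw JSON listing of both profiles
        for _, p := range []string{"second-113043", "first-110038"} {
            mk("delete", "-p", p) // clean up, as helpers_test.go:178 does
        }
    }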

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-426452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-426452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.394120922s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.39s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-426452 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-426452 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
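
The mount-start tests boot a no-Kubernetes VM with a 9p host mount (note the explicit --mount-port 46464 and --mount-msize 6543 above) and then check it from inside the guest with `ssh -- ls /minikube-host` and `ssh -- mount | grep 9p`. Below is a sketch of the same check that also looks for the requested port and msize in the mount entry; it assumes the guest lists those 9p options in its mount table, which is typical for trans=tcp 9p mounts.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "mount-start-1-426452" // started with --mount-port 46464 --mount-msize 6543 above

        // Run the same pipeline as the test inside the guest's shell.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "--", "mount | grep 9p").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("no 9p mount found: %v\n%s", err, out))
        }
        entry := string(out)
        for _, want := range []string{"port=46464", "msize=6543"} {
            if !strings.Contains(entry, want) {
                fmt.Printf("mount entry missing %q:\n%s", want, entry)
            }
        }
        fmt.Print(entry)
    }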

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-442626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1218 22:57:39.919316   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:39.924641   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:39.934924   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:39.955213   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:39.995589   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:40.075964   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:40.236400   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:40.556711   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:41.197625   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:41.693186   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 22:57:42.478096   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:45.039886   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:57:47.847870   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 22:57:50.160655   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:58:00.401459   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-442626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.200980749s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-426452 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-442626
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-442626: (1.174726548s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-442626
E1218 22:58:20.881692   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-442626: (21.882446082s)
--- PASS: TestMountStart/serial/RestartStopped (22.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-442626 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (180.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-706721 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1218 22:59:01.842904   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 22:59:57.848530   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:00:23.763396   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:00:25.533696   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-706721 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m59.959115241s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (180.39s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-706721 -- rollout status deployment/busybox: (4.786048874s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-8xmq5 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-rx24q -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-8xmq5 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-rx24q -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-8xmq5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-rx24q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.56s)
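
The two-node deployment check is: apply the busybox test deployment, wait for the rollout, read the pod names with jsonpath, and exec nslookup in each pod so DNS is proven from both nodes. A compact sketch of that loop; the deployment name and manifest path are the ones in the log, and `minikube kubectl --` is used as the test does.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubectl runs `minikube kubectl -p <profile> -- <args...>` and returns trimmed output.
    func kubectl(profile string, args ...string) string {
        full := append([]string{"kubectl", "-p", profile, "--"}, args...)
        out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl %v failed: %v\n%s\n", args, err, out)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        profile := "multinode-706721"
        kubectl(profile, "apply", "-f", "./testdata/multinodes/multinode-pod-dns-test.yaml")
        kubectl(profile, "rollout", "status", "deployment/busybox")

        names := kubectl(profile, "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
        for _, pod := range strings.Fields(names) {
            // Each busybox pod should resolve the in-cluster service name via cluster DNS.
            fmt.Println(kubectl(profile, "exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local"))
        }
    }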

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-8xmq5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-8xmq5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-rx24q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-706721 -- exec busybox-5bc68d56bd-rx24q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
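
The host-ping check relies on a small busybox pipeline: line 5 of `nslookup host.minikube.internal` output carries the resolved address, `cut -d' ' -f3` extracts it, and `ping -c 1` confirms the host gateway (192.168.39.1 here) answers from inside each pod. A sketch of the same check; the pod names are the ephemeral ones from this run and would differ on a fresh deployment.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // podSh execs a shell one-liner inside a pod through `minikube kubectl`.
    func podSh(profile, pod, script string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
            "exec", pod, "--", "sh", "-c", script).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        profile := "multinode-706721"
        for _, pod := range []string{"busybox-5bc68d56bd-8xmq5", "busybox-5bc68d56bd-rx24q"} { // pod names from this run
            // Same pipeline as the test: line 5 of busybox nslookup output holds the resolved address.
            hostIP, err := podSh(profile, pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
            if err != nil || hostIP == "" {
                fmt.Printf("%s: could not resolve host.minikube.internal: %v\n", pod, err)
                continue
            }
            if _, err := podSh(profile, pod, "ping -c 1 "+hostIP); err != nil {
                fmt.Printf("%s: host %s not reachable\n", pod, hostIP)
            } else {
                fmt.Printf("%s: host %s reachable\n", pod, hostIP)
            }
        }
    }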

                                                
                                    
TestMultiNode/serial/AddNode (43.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-706721 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-706721 -v 3 --alsologtostderr: (42.800378996s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-706721 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp testdata/cp-test.txt multinode-706721:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3565078122/001/cp-test_multinode-706721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721:/home/docker/cp-test.txt multinode-706721-m02:/home/docker/cp-test_multinode-706721_multinode-706721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test_multinode-706721_multinode-706721-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721:/home/docker/cp-test.txt multinode-706721-m03:/home/docker/cp-test_multinode-706721_multinode-706721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test_multinode-706721_multinode-706721-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp testdata/cp-test.txt multinode-706721-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3565078122/001/cp-test_multinode-706721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m02:/home/docker/cp-test.txt multinode-706721:/home/docker/cp-test_multinode-706721-m02_multinode-706721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test_multinode-706721-m02_multinode-706721.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m02:/home/docker/cp-test.txt multinode-706721-m03:/home/docker/cp-test_multinode-706721-m02_multinode-706721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test_multinode-706721-m02_multinode-706721-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp testdata/cp-test.txt multinode-706721-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3565078122/001/cp-test_multinode-706721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m03:/home/docker/cp-test.txt multinode-706721:/home/docker/cp-test_multinode-706721-m03_multinode-706721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721 "sudo cat /home/docker/cp-test_multinode-706721-m03_multinode-706721.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 cp multinode-706721-m03:/home/docker/cp-test.txt multinode-706721-m02:/home/docker/cp-test_multinode-706721-m03_multinode-706721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 ssh -n multinode-706721-m02 "sudo cat /home/docker/cp-test_multinode-706721-m03_multinode-706721-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.61s)
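
The copy-file matrix above repeats one pattern for every host/node pair: `minikube cp <src> <node>:<path>` followed by `ssh -n <node> "sudo cat <path>"` to confirm the bytes arrived. Here is a sketch of a single round trip with a content comparison added for illustration; the profile, node, and paths are taken from the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // minikube runs the binary from this report and returns its combined output.
    func minikube(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        profile, node, remote := "multinode-706721", "multinode-706721-m02", "/home/docker/cp-test.txt"

        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            panic(err)
        }
        // Copy host -> node, then read it back over ssh, as helpers_test.go:556/534 do.
        if _, err := minikube("-p", profile, "cp", "testdata/cp-test.txt", node+":"+remote); err != nil {
            panic(err)
        }
        got, err := minikube("-p", profile, "ssh", "-n", node, "sudo cat "+remote)
        if err != nil {
            panic(err)
        }
        if strings.TrimSpace(got) == strings.TrimSpace(string(want)) {
            fmt.Println("copy verified on", node)
        } else {
            fmt.Println("content mismatch on", node)
        }
    }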

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-706721 node stop m03: (1.261726294s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-706721 status: exit status 7 (441.246365ms)

                                                
                                                
-- stdout --
	multinode-706721
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-706721-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-706721-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr: exit status 7 (433.109033ms)

                                                
                                                
-- stdout --
	multinode-706721
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-706721-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-706721-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:02:34.943143   28884 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:02:34.943266   28884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:02:34.943272   28884 out.go:309] Setting ErrFile to fd 2...
	I1218 23:02:34.943276   28884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:02:34.943443   28884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 23:02:34.943600   28884 out.go:303] Setting JSON to false
	I1218 23:02:34.943632   28884 mustload.go:65] Loading cluster: multinode-706721
	I1218 23:02:34.943662   28884 notify.go:220] Checking for updates...
	I1218 23:02:34.944072   28884 config.go:182] Loaded profile config "multinode-706721": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:02:34.944091   28884 status.go:255] checking status of multinode-706721 ...
	I1218 23:02:34.944595   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:34.944635   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:34.962475   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I1218 23:02:34.962853   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:34.963373   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:34.963397   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:34.963750   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:34.963941   28884 main.go:141] libmachine: (multinode-706721) Calling .GetState
	I1218 23:02:34.965429   28884 status.go:330] multinode-706721 host status = "Running" (err=<nil>)
	I1218 23:02:34.965441   28884 host.go:66] Checking if "multinode-706721" exists ...
	I1218 23:02:34.965711   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:34.965747   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:34.980645   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I1218 23:02:34.981040   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:34.981519   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:34.981544   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:34.981907   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:34.982068   28884 main.go:141] libmachine: (multinode-706721) Calling .GetIP
	I1218 23:02:34.984979   28884 main.go:141] libmachine: (multinode-706721) DBG | domain multinode-706721 has defined MAC address 52:54:00:70:4b:c9 in network mk-multinode-706721
	I1218 23:02:34.985397   28884 main.go:141] libmachine: (multinode-706721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:4b:c9", ip: ""} in network mk-multinode-706721: {Iface:virbr1 ExpiryTime:2023-12-18 23:58:49 +0000 UTC Type:0 Mac:52:54:00:70:4b:c9 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-706721 Clientid:01:52:54:00:70:4b:c9}
	I1218 23:02:34.985441   28884 main.go:141] libmachine: (multinode-706721) DBG | domain multinode-706721 has defined IP address 192.168.39.70 and MAC address 52:54:00:70:4b:c9 in network mk-multinode-706721
	I1218 23:02:34.985497   28884 host.go:66] Checking if "multinode-706721" exists ...
	I1218 23:02:34.985756   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:34.985795   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:34.999257   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I1218 23:02:34.999569   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:34.999985   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:35.000010   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:35.000387   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:35.000560   28884 main.go:141] libmachine: (multinode-706721) Calling .DriverName
	I1218 23:02:35.000756   28884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:02:35.000787   28884 main.go:141] libmachine: (multinode-706721) Calling .GetSSHHostname
	I1218 23:02:35.003332   28884 main.go:141] libmachine: (multinode-706721) DBG | domain multinode-706721 has defined MAC address 52:54:00:70:4b:c9 in network mk-multinode-706721
	I1218 23:02:35.003739   28884 main.go:141] libmachine: (multinode-706721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:4b:c9", ip: ""} in network mk-multinode-706721: {Iface:virbr1 ExpiryTime:2023-12-18 23:58:49 +0000 UTC Type:0 Mac:52:54:00:70:4b:c9 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-706721 Clientid:01:52:54:00:70:4b:c9}
	I1218 23:02:35.003768   28884 main.go:141] libmachine: (multinode-706721) DBG | domain multinode-706721 has defined IP address 192.168.39.70 and MAC address 52:54:00:70:4b:c9 in network mk-multinode-706721
	I1218 23:02:35.003911   28884 main.go:141] libmachine: (multinode-706721) Calling .GetSSHPort
	I1218 23:02:35.004086   28884 main.go:141] libmachine: (multinode-706721) Calling .GetSSHKeyPath
	I1218 23:02:35.004238   28884 main.go:141] libmachine: (multinode-706721) Calling .GetSSHUsername
	I1218 23:02:35.005540   28884 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/multinode-706721/id_rsa Username:docker}
	I1218 23:02:35.088009   28884 ssh_runner.go:195] Run: systemctl --version
	I1218 23:02:35.094463   28884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:02:35.107278   28884 kubeconfig.go:92] found "multinode-706721" server: "https://192.168.39.70:8443"
	I1218 23:02:35.107299   28884 api_server.go:166] Checking apiserver status ...
	I1218 23:02:35.107336   28884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:02:35.119015   28884 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	I1218 23:02:35.127882   28884 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod81bf7aa22a37de4c8b056203bb50124b/efb56bb6693c23765dade59eb7ac36a30e47b5459a6a59ad44d8a8b6ee605a3b"
	I1218 23:02:35.127924   28884 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod81bf7aa22a37de4c8b056203bb50124b/efb56bb6693c23765dade59eb7ac36a30e47b5459a6a59ad44d8a8b6ee605a3b/freezer.state
	I1218 23:02:35.136170   28884 api_server.go:204] freezer state: "THAWED"
	I1218 23:02:35.136208   28884 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1218 23:02:35.142228   28884 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1218 23:02:35.142255   28884 status.go:421] multinode-706721 apiserver status = Running (err=<nil>)
	I1218 23:02:35.142273   28884 status.go:257] multinode-706721 status: &{Name:multinode-706721 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:02:35.142298   28884 status.go:255] checking status of multinode-706721-m02 ...
	I1218 23:02:35.142745   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:35.142790   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:35.157360   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I1218 23:02:35.157783   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:35.158204   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:35.158223   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:35.158599   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:35.158771   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetState
	I1218 23:02:35.160356   28884 status.go:330] multinode-706721-m02 host status = "Running" (err=<nil>)
	I1218 23:02:35.160379   28884 host.go:66] Checking if "multinode-706721-m02" exists ...
	I1218 23:02:35.160724   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:35.160779   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:35.175021   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I1218 23:02:35.175393   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:35.175807   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:35.175831   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:35.176160   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:35.176356   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetIP
	I1218 23:02:35.179060   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | domain multinode-706721-m02 has defined MAC address 52:54:00:2c:81:a6 in network mk-multinode-706721
	I1218 23:02:35.179451   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:81:a6", ip: ""} in network mk-multinode-706721: {Iface:virbr1 ExpiryTime:2023-12-18 23:59:57 +0000 UTC Type:0 Mac:52:54:00:2c:81:a6 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-706721-m02 Clientid:01:52:54:00:2c:81:a6}
	I1218 23:02:35.179482   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | domain multinode-706721-m02 has defined IP address 192.168.39.240 and MAC address 52:54:00:2c:81:a6 in network mk-multinode-706721
	I1218 23:02:35.179593   28884 host.go:66] Checking if "multinode-706721-m02" exists ...
	I1218 23:02:35.179872   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:35.179905   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:35.193526   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37723
	I1218 23:02:35.193881   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:35.194279   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:35.194298   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:35.194596   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:35.194781   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .DriverName
	I1218 23:02:35.195004   28884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:02:35.195027   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetSSHHostname
	I1218 23:02:35.197569   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | domain multinode-706721-m02 has defined MAC address 52:54:00:2c:81:a6 in network mk-multinode-706721
	I1218 23:02:35.198004   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:81:a6", ip: ""} in network mk-multinode-706721: {Iface:virbr1 ExpiryTime:2023-12-18 23:59:57 +0000 UTC Type:0 Mac:52:54:00:2c:81:a6 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-706721-m02 Clientid:01:52:54:00:2c:81:a6}
	I1218 23:02:35.198034   28884 main.go:141] libmachine: (multinode-706721-m02) DBG | domain multinode-706721-m02 has defined IP address 192.168.39.240 and MAC address 52:54:00:2c:81:a6 in network mk-multinode-706721
	I1218 23:02:35.198177   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetSSHPort
	I1218 23:02:35.198354   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetSSHKeyPath
	I1218 23:02:35.198529   28884 main.go:141] libmachine: (multinode-706721-m02) Calling .GetSSHUsername
	I1218 23:02:35.198690   28884 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17822-6323/.minikube/machines/multinode-706721-m02/id_rsa Username:docker}
	I1218 23:02:35.290901   28884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:02:35.303766   28884 status.go:257] multinode-706721-m02 status: &{Name:multinode-706721-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:02:35.303805   28884 status.go:255] checking status of multinode-706721-m03 ...
	I1218 23:02:35.304138   28884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:02:35.304175   28884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:02:35.319641   28884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I1218 23:02:35.319984   28884 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:02:35.320504   28884 main.go:141] libmachine: Using API Version  1
	I1218 23:02:35.320532   28884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:02:35.320837   28884 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:02:35.321024   28884 main.go:141] libmachine: (multinode-706721-m03) Calling .GetState
	I1218 23:02:35.322498   28884 status.go:330] multinode-706721-m03 host status = "Stopped" (err=<nil>)
	I1218 23:02:35.322513   28884 status.go:343] host is not running, skipping remaining checks
	I1218 23:02:35.322527   28884 status.go:257] multinode-706721-m03 status: &{Name:multinode-706721-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
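For anyone replaying the StopNode step by hand, the log above boils down to stopping one worker and reading the per-node status. A minimal sketch, assuming a released minikube binary on PATH and a hypothetical three-node profile named mn-demo (the suite itself uses out/minikube-linux-amd64 and profile multinode-706721):

    # create a three-node cluster on the KVM driver with containerd
    minikube start -p mn-demo --driver=kvm2 --container-runtime=containerd --nodes=3
    # stop only the third node, then confirm it is reported as Stopped
    minikube -p mn-demo node stop m03
    minikube -p mn-demo status --alsologtostderr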

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 node start m03 --alsologtostderr
E1218 23:02:39.919226   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:02:47.848550   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-706721 node start m03 --alsologtostderr: (27.463841s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.10s)
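StartAfterStop is the counterpart: node start against the node stopped in the previous step, followed by a cluster-wide check. Roughly, under the same hypothetical mn-demo assumption:

    # bring the stopped worker back and confirm every node reports Ready
    minikube -p mn-demo node start m03 --alsologtostderr
    minikube -p mn-demo status
    kubectl get nodes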

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (310.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-706721
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-706721
E1218 23:03:07.605116   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:04:10.892426   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 23:04:57.848764   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-706721: (3m4.269822584s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-706721 --wait=true -v=8 --alsologtostderr
E1218 23:07:39.919313   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:07:47.847322   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-706721 --wait=true -v=8 --alsologtostderr: (2m5.660605867s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-706721
--- PASS: TestMultiNode/serial/RestartKeepsNodes (310.04s)
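RestartKeepsNodes asserts that a full stop/start cycle preserves the node list. A hedged sketch of the same cycle on the hypothetical mn-demo profile:

    # record the node list, stop everything, restart with --wait, and compare
    minikube node list -p mn-demo
    minikube stop -p mn-demo
    minikube start -p mn-demo --wait=true -v=8 --alsologtostderr
    minikube node list -p mn-demo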

                                                
                                    
TestMultiNode/serial/DeleteNode (1.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-706721 node delete m03: (1.186260872s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.71s)
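DeleteNode removes the third node and then checks both minikube's own status and the API server's view of the cluster. Sketch, same assumptions as above:

    # delete the worker, then confirm it is gone on both sides
    minikube -p mn-demo node delete m03
    minikube -p mn-demo status --alsologtostderr
    kubectl get nodes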

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 stop
E1218 23:09:57.848848   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-706721 stop: (3m3.048203903s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-706721 status: exit status 7 (90.536436ms)

                                                
                                                
-- stdout --
	multinode-706721
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-706721-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr: exit status 7 (91.034478ms)

                                                
                                                
-- stdout --
	multinode-706721
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-706721-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:11:18.367481   31024 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:11:18.367590   31024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:11:18.367598   31024 out.go:309] Setting ErrFile to fd 2...
	I1218 23:11:18.367603   31024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:11:18.367816   31024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 23:11:18.367969   31024 out.go:303] Setting JSON to false
	I1218 23:11:18.368004   31024 mustload.go:65] Loading cluster: multinode-706721
	I1218 23:11:18.368121   31024 notify.go:220] Checking for updates...
	I1218 23:11:18.368548   31024 config.go:182] Loaded profile config "multinode-706721": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:11:18.368568   31024 status.go:255] checking status of multinode-706721 ...
	I1218 23:11:18.369061   31024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:11:18.369146   31024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:11:18.386636   31024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40543
	I1218 23:11:18.387029   31024 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:11:18.387541   31024 main.go:141] libmachine: Using API Version  1
	I1218 23:11:18.387579   31024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:11:18.387890   31024 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:11:18.388039   31024 main.go:141] libmachine: (multinode-706721) Calling .GetState
	I1218 23:11:18.389530   31024 status.go:330] multinode-706721 host status = "Stopped" (err=<nil>)
	I1218 23:11:18.389551   31024 status.go:343] host is not running, skipping remaining checks
	I1218 23:11:18.389557   31024 status.go:257] multinode-706721 status: &{Name:multinode-706721 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:11:18.389571   31024 status.go:255] checking status of multinode-706721-m02 ...
	I1218 23:11:18.389849   31024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1218 23:11:18.389879   31024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 23:11:18.403285   31024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I1218 23:11:18.403711   31024 main.go:141] libmachine: () Calling .GetVersion
	I1218 23:11:18.404173   31024 main.go:141] libmachine: Using API Version  1
	I1218 23:11:18.404197   31024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 23:11:18.404534   31024 main.go:141] libmachine: () Calling .GetMachineName
	I1218 23:11:18.404715   31024 main.go:141] libmachine: (multinode-706721-m02) Calling .GetState
	I1218 23:11:18.406122   31024 status.go:330] multinode-706721-m02 host status = "Stopped" (err=<nil>)
	I1218 23:11:18.406134   31024 status.go:343] host is not running, skipping remaining checks
	I1218 23:11:18.406141   31024 status.go:257] multinode-706721-m02 status: &{Name:multinode-706721-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.23s)
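StopMultiNode stops every node and then expects status to exit with code 7 while reporting all components as Stopped, which is what the stdout blocks above show. Sketch, still on the hypothetical mn-demo profile:

    # stop the whole cluster; a stopped host makes status exit 7, which is the expected outcome here
    minikube -p mn-demo stop
    minikube -p mn-demo status --alsologtostderr
    echo "status exit code: $?"   # expected: 7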

                                                
                                    
TestMultiNode/serial/RestartMultiNode (92.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-706721 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1218 23:11:20.894363   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:12:39.919693   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:12:47.847425   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-706721 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m31.841867349s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-706721 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (92.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-706721
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-706721-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-706721-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (75.022985ms)

                                                
                                                
-- stdout --
	* [multinode-706721-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-706721-m02' is duplicated with machine name 'multinode-706721-m02' in profile 'multinode-706721'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-706721-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-706721-m03 --driver=kvm2  --container-runtime=containerd: (48.913028578s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-706721
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-706721: exit status 80 (238.562552ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-706721
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-706721-m03 already exists in multinode-706721-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-706721-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.06s)
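ValidateNameConflict covers two guards: start -p refuses a profile name that collides with an existing machine name (MK_USAGE, exit 14), and node add refuses to create a node whose name already belongs to another profile (GUEST_NODE_ADD, exit 80). A sketch of the first guard, again using the hypothetical mn-demo profile whose second machine is mn-demo-m02:

    # should fail with MK_USAGE: mn-demo-m02 is already a machine inside profile mn-demo
    minikube start -p mn-demo-m02 --driver=kvm2 --container-runtime=containerd
    echo "exit code: $?"   # expected: 14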

                                                
                                    
TestPreload (445.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416915 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1218 23:14:02.966360   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:14:57.848211   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:17:39.919365   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:17:47.847893   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416915 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (4m26.556779941s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416915 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-416915 image pull gcr.io/k8s-minikube/busybox: (3.690401287s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-416915
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-416915: (1m32.077661617s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416915 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1218 23:19:57.848964   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:20:50.892937   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416915 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.051592084s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416915 image list
helpers_test.go:175: Cleaning up "test-preload-416915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-416915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-416915: (1.039958061s)
--- PASS: TestPreload (445.65s)
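TestPreload starts an older Kubernetes with the preloaded-images tarball disabled, pulls an extra image, restarts on the current default version, and checks that the pulled image survived. Sketch with a hypothetical preload-demo profile:

    # start without the preload tarball on an older Kubernetes and pull an extra image
    minikube start -p preload-demo --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    # restart on the default Kubernetes version and confirm the image is still listed
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd
    minikube -p preload-demo image list
    minikube delete -p preload-demo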

                                                
                                    
TestScheduledStopUnix (121.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-677613 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-677613 --memory=2048 --driver=kvm2  --container-runtime=containerd: (50.100418938s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-677613 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-677613 -n scheduled-stop-677613
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-677613 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-677613 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-677613 -n scheduled-stop-677613
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-677613
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-677613 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1218 23:22:39.918934   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:22:47.848237   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-677613
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-677613: exit status 7 (74.523703ms)

                                                
                                                
-- stdout --
	scheduled-stop-677613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-677613 -n scheduled-stop-677613
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-677613 -n scheduled-stop-677613: exit status 7 (72.873846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-677613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-677613
--- PASS: TestScheduledStopUnix (121.88s)
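ScheduledStopUnix exercises the --schedule flag: schedule a stop well in the future, cancel it, then schedule a short one and wait for the host to actually reach Stopped (status exits 7). Sketch with a hypothetical sched-demo profile:

    minikube start -p sched-demo --memory=2048 --driver=kvm2 --container-runtime=containerd
    # schedule a stop five minutes out, then cancel it
    minikube stop -p sched-demo --schedule 5m
    minikube stop -p sched-demo --cancel-scheduled
    # schedule a 15s stop and give it time to fire
    minikube stop -p sched-demo --schedule 15s
    sleep 30
    minikube status -p sched-demo --format='{{.Host}}'   # expected: Stopped, exit code 7
    minikube delete -p sched-demo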

                                                
                                    
TestRunningBinaryUpgrade (242.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2435266989.exe start -p running-upgrade-433756 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2435266989.exe start -p running-upgrade-433756 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m18.43853199s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-433756 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-433756 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m39.076169794s)
helpers_test.go:175: Cleaning up "running-upgrade-433756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-433756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-433756: (1.188483104s)
--- PASS: TestRunningBinaryUpgrade (242.05s)

                                                
                                    
TestKubernetesUpgrade (215.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1218 23:27:39.919010   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:27:47.846757   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m48.952976417s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-743837
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-743837: (2.11952172s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-743837 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-743837 status --format={{.Host}}: exit status 7 (87.289252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.045776431s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-743837 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (113.697386ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-743837] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-743837
	    minikube start -p kubernetes-upgrade-743837 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7438372 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-743837 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1218 23:30:42.967573   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743837 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (38.936159493s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-743837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-743837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-743837: (1.692958139s)
--- PASS: TestKubernetesUpgrade (215.02s)
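TestKubernetesUpgrade covers three cases: upgrading a stopped v1.16.0 cluster to v1.29.0-rc.2, verifying that a downgrade request is rejected (K8S_DOWNGRADE_UNSUPPORTED, exit 106), and restarting on the new version afterwards. Sketch with a hypothetical upgrade-demo profile:

    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
    minikube stop -p upgrade-demo
    # upgrade in place to the newer version
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=containerd
    # a downgrade attempt is refused (exit 106); per the suggestion above, delete and recreate if the old version is needed
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
    minikube delete -p upgrade-demo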

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.370584ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-408626] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
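StartNoK8sWithVersion only checks flag validation: combining --no-kubernetes with --kubernetes-version is rejected immediately (MK_USAGE, exit 14), and the error points at clearing any globally configured version. Sketch with a hypothetical nok8s-demo profile:

    # rejected before any VM is created
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd
    # if the version comes from global config rather than the command line, unset it
    minikube config unset kubernetes-version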

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (106.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408626 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408626 --driver=kvm2  --container-runtime=containerd: (1m46.369305222s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408626 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.67s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (74.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1218 23:24:57.848439   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m13.453120322s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408626 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-408626 status -o json: exit status 2 (302.163267ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-408626","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-408626
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (74.69s)

                                                
                                    
TestNoKubernetes/serial/Start (40.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408626 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (40.981873487s)
--- PASS: TestNoKubernetes/serial/Start (40.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408626 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408626 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.652208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
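VerifyK8sNotRunning confirms that kubelet is not an active service inside a --no-kubernetes VM; the non-zero ssh exit is the expected result. Sketch against the hypothetical nok8s-demo profile started with --no-kubernetes:

    # systemctl reports the unit as inactive, which propagates as a non-zero ssh exit
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
    echo "exit code: $?"   # expected: non-zero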

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.675048255s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.361779s)
--- PASS: TestNoKubernetes/serial/ProfileList (16.04s)

                                                
                                    
TestNetworkPlugins/group/false (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-246992 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-246992 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (123.273371ms)

                                                
                                                
-- stdout --
	* [false-246992] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:27:03.033276   38710 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:27:03.033376   38710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:27:03.033380   38710 out.go:309] Setting ErrFile to fd 2...
	I1218 23:27:03.033385   38710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:27:03.033539   38710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-6323/.minikube/bin
	I1218 23:27:03.034114   38710 out.go:303] Setting JSON to false
	I1218 23:27:03.034978   38710 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4169,"bootTime":1702937854,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 23:27:03.035033   38710 start.go:138] virtualization: kvm guest
	I1218 23:27:03.037531   38710 out.go:177] * [false-246992] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 23:27:03.038978   38710 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:27:03.040403   38710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:27:03.039016   38710 notify.go:220] Checking for updates...
	I1218 23:27:03.041825   38710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-6323/kubeconfig
	I1218 23:27:03.043158   38710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-6323/.minikube
	I1218 23:27:03.044715   38710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 23:27:03.046258   38710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:27:03.048237   38710 config.go:182] Loaded profile config "NoKubernetes-408626": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1218 23:27:03.048359   38710 config.go:182] Loaded profile config "cert-expiration-177378": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I1218 23:27:03.048471   38710 config.go:182] Loaded profile config "running-upgrade-433756": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1218 23:27:03.048559   38710 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:27:03.086421   38710 out.go:177] * Using the kvm2 driver based on user configuration
	I1218 23:27:03.087880   38710 start.go:298] selected driver: kvm2
	I1218 23:27:03.087894   38710 start.go:902] validating driver "kvm2" against <nil>
	I1218 23:27:03.087906   38710 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:27:03.090196   38710 out.go:177] 
	W1218 23:27:03.091725   38710 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1218 23:27:03.093222   38710 out.go:177] 

                                                
                                                
** /stderr **
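The failure above is the point of this group: with the containerd runtime, --cni=false is rejected up front (MK_USAGE, exit 14) because containerd needs a CNI plugin for pod networking. A hedged sketch of the rejected call and a working alternative, using a hypothetical false-demo profile:

    # rejected: the containerd runtime requires CNI
    minikube start -p false-demo --cni=false --driver=kvm2 --container-runtime=containerd
    # accepted: let minikube choose its default CNI (or name one explicitly, e.g. --cni=bridge)
    minikube start -p false-demo --driver=kvm2 --container-runtime=containerd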
net_test.go:88: 
----------------------- debugLogs start: false-246992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-246992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.38:8443
  name: cert-expiration-177378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.221:8443
  name: running-upgrade-433756
contexts:
- context:
    cluster: cert-expiration-177378
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-177378
  name: cert-expiration-177378
- context:
    cluster: running-upgrade-433756
    user: running-upgrade-433756
  name: running-upgrade-433756
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-177378
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.key
- name: running-upgrade-433756
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-246992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-246992"

                                                
                                                
----------------------- debugLogs end: false-246992 [took: 3.035068571s] --------------------------------
helpers_test.go:175: Cleaning up "false-246992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-246992
--- PASS: TestNetworkPlugins/group/false (3.31s)
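For reference: the repeated "Profile \"false-246992\" not found" lines above are emitted by the post-test debug collector, which appears to query a profile that this "false" network-plugin case never actually created. Assuming a local out/minikube-linux-amd64 build, the two commands referenced in that message are the way to confirm the same state by hand (the second would create the missing profile):
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 start -p false-246992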

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-408626
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-408626: (1.353354041s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (49.33s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408626 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408626 --driver=kvm2  --container-runtime=containerd: (49.325561263s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (49.33s)

TestPause/serial/Start (67.38s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-208480 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-208480 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m7.38058788s)
--- PASS: TestPause/serial/Start (67.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408626 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408626 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.325104ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
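For reference: systemctl is-active exits non-zero when the queried unit is not active (status 3 in the remote shell above), so the failed ssh probe is exactly what this check expects from a cluster started without Kubernetes. A rough manual equivalent, assuming the NoKubernetes-408626 profile is still running:
	out/minikube-linux-amd64 ssh -p NoKubernetes-408626 "sudo systemctl is-active kubelet"
	echo $?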

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.09s)
=== RUN   TestStoppedBinaryUpgrade/Setup
E1218 23:28:00.895077   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (3.09s)

TestStoppedBinaryUpgrade/Upgrade (196.13s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.4111117945.exe start -p stopped-upgrade-128861 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.4111117945.exe start -p stopped-upgrade-128861 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m14.294047251s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.4111117945.exe -p stopped-upgrade-128861 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.4111117945.exe -p stopped-upgrade-128861 stop: (1.379024394s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-128861 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-128861 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m0.459236695s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (196.13s)
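For reference, the upgrade path exercised above is three CLI steps, which can be replayed by hand assuming the old release binary is still available at the same temporary path used in this run:
	/tmp/minikube-v1.26.0.4111117945.exe start -p stopped-upgrade-128861 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	/tmp/minikube-v1.26.0.4111117945.exe -p stopped-upgrade-128861 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-128861 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd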

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (34.91s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-208480 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-208480 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (34.892301188s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.91s)

TestPause/serial/Pause (0.97s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-208480 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.97s)

TestPause/serial/VerifyStatus (0.28s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-208480 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-208480 --output=json --layout=cluster: exit status 2 (284.33982ms)

                                                
                                                
-- stdout --
	{"Name":"pause-208480","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-208480","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
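For reference: the status line above is plain JSON, so it can be inspected with standard tooling; assuming jq is installed, something like the following prints just the per-node component states (apiserver paused, kubelet stopped in this run):
	out/minikube-linux-amd64 status -p pause-208480 --output=json --layout=cluster | jq '.Nodes[].Components'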

                                                
                                    
TestPause/serial/Unpause (0.91s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-208480 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (0.89s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-208480 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (1.5s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-208480 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-208480 --alsologtostderr -v=5: (1.502996222s)
--- PASS: TestPause/serial/DeletePaused (1.50s)

TestPause/serial/VerifyDeletedResources (0.55s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

TestNetworkPlugins/group/auto/Start (106.77s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m46.769705115s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.77s)

TestNetworkPlugins/group/kindnet/Start (93.43s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E1218 23:29:57.848384   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m33.430847434s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8hjx4" [b9af04ba-4fc5-4a7b-9dfd-155b91280bb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8hjx4" [b9af04ba-4fc5-4a7b-9dfd-155b91280bb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.007018132s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.32s)

TestNetworkPlugins/group/calico/Start (102.97s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m42.974676058s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.97s)

TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (90.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m30.190045369s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.84s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-128861
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-128861: (3.840116529s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.84s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p9mmp" [311181e7-407d-4b6f-95a1-3c3cf7562b1b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005610684s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.17s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-246992 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-linux-amd64 ssh -p kindnet-246992 "pgrep -a kubelet": (1.166214183s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fn644" [41a699de-201b-4f46-aaa5-1e6bdbd4de5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fn644" [41a699de-201b-4f46-aaa5-1e6bdbd4de5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006157621s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/Start (120.96s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m0.96234082s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.96s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (103.1s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m43.095829521s)
--- PASS: TestNetworkPlugins/group/flannel/Start (103.10s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rnjpb" [c7f75085-b821-4bf8-950d-182ee98861d1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005345286s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-htqqt" [d4d72fba-0a0a-46cd-9dd7-30b0815c00fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 23:32:39.919417   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-htqqt" [d4d72fba-0a0a-46cd-9dd7-30b0815c00fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006512105s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4cwzd" [d411cb20-4247-4a7f-a12d-e3f638fe492b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4cwzd" [d411cb20-4247-4a7f-a12d-e3f638fe492b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005296575s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (103.85s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-246992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m43.854010584s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.85s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m33.651480363s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.65s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mc54t" [c8492e07-bae7-4784-abf2-3a36b123df00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mc54t" [c8492e07-bae7-4784-abf2-3a36b123df00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.00424137s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tvswt" [bc7f008a-6694-4fda-8f1d-b679343292fa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005812683s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kkbt8" [47f0d0be-75c9-48a2-beed-3bda3ed5585e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kkbt8" [47f0d0be-75c9-48a2-beed-3bda3ed5585e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00513938s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (150.08s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-364511 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-364511 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (2m30.077406061s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (150.08s)

TestStartStop/group/embed-certs/serial/FirstStart (118.84s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-554889 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-554889 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m58.83767701s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (118.84s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-246992 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-246992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tvgpc" [7219ba9a-2f37-4df6-a168-d8d6aacb5f0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tvgpc" [7219ba9a-2f37-4df6-a168-d8d6aacb5f0d] Running
E1218 23:34:57.848177   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004031419s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
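For reference: the NetCatPod checks above wait on pods carrying the app=netcat label in the default namespace; assuming the bridge-246992 context still exists in the kubeconfig, the same pods can be listed directly with:
	kubectl --context bridge-246992 get pods -l app=netcat -n default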

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-246992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-246992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1218 23:44:03.631679   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414735 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1218 23:35:45.618993   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.624345   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.634624   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.655425   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.696155   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.776903   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:45.937759   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:46.258043   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:46.898836   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:35:48.179483   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414735 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m44.176777192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (15.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-073675 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee599e09-0aa7-4f53-94ea-25e71b630838] Pending
E1218 23:35:50.740586   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ee599e09-0aa7-4f53-94ea-25e71b630838] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1218 23:35:55.861488   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ee599e09-0aa7-4f53-94ea-25e71b630838] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 15.004312204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-073675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (15.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-073675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-073675 describe deploy/metrics-server -n kube-system
E1218 23:36:06.102176   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (92.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-073675 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-073675 --alsologtostderr -v=3: (1m32.520402388s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.52s)

TestStartStop/group/embed-certs/serial/DeployApp (11.3s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-554889 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [18c96190-6304-4240-8ec3-a73a3b34dd20] Pending
helpers_test.go:344: "busybox" [18c96190-6304-4240-8ec3-a73a3b34dd20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [18c96190-6304-4240-8ec3-a73a3b34dd20] Running
E1218 23:36:26.582986   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:36:26.983678   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:26.988955   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:26.999245   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:27.019498   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:27.059771   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:27.140108   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:27.300876   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:27.621891   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:28.263075   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:36:29.543267   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004273263s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-554889 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-554889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1218 23:36:32.104130   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-554889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.145537615s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-554889 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (92.34s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-554889 --alsologtostderr -v=3
E1218 23:36:37.224660   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-554889 --alsologtostderr -v=3: (1m32.341374191s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.34s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-364511 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1041073e-527b-4a3f-91c5-ce1525f63314] Pending
helpers_test.go:344: "busybox" [1041073e-527b-4a3f-91c5-ce1525f63314] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1041073e-527b-4a3f-91c5-ce1525f63314] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004449163s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-364511 exec busybox -- /bin/sh -c "ulimit -n"
E1218 23:36:47.465649   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-364511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-364511 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (92.32s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-364511 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-364511 --alsologtostderr -v=3: (1m32.318609694s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-414735 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e21d82ef-df02-49b2-aba6-7215467d9b3c] Pending
helpers_test.go:344: "busybox" [e21d82ef-df02-49b2-aba6-7215467d9b3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1218 23:37:07.543643   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:37:07.946855   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e21d82ef-df02-49b2-aba6-7215467d9b3c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00408818s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-414735 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030525375s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-414735 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-414735 --alsologtostderr -v=3
E1218 23:37:29.926452   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:29.931698   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:29.941921   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:29.962181   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:30.002454   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:30.082775   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:30.243306   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:30.564245   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:30.893428   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 23:37:31.205028   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:32.485926   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:35.046192   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-414735 --alsologtostderr -v=3: (1m31.787213126s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073675 -n old-k8s-version-073675
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073675 -n old-k8s-version-073675: exit status 7 (73.852364ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-073675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (388.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-073675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1218 23:37:39.919273   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:37:40.166943   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:45.918332   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:45.923652   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:45.933862   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:45.954291   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:45.994595   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:46.075042   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:46.235641   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:46.556752   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:47.197610   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:47.847616   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 23:37:48.478010   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:48.907579   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:37:50.408057   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:37:51.038534   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:37:56.159053   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-073675 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m28.396377332s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-073675 -n old-k8s-version-073675
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (388.70s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-554889 -n embed-certs-554889
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-554889 -n embed-certs-554889: exit status 7 (85.453943ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-554889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (309.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-554889 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1218 23:38:06.399705   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:38:10.889004   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-554889 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m9.157020455s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-554889 -n embed-certs-554889
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364511 -n no-preload-364511
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364511 -n no-preload-364511: exit status 7 (77.66362ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-364511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (313.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-364511 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1218 23:38:26.880636   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:38:29.464309   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:38:35.946162   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:35.951509   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:35.961814   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:35.982095   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:36.022588   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:36.103695   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:36.264474   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:36.585309   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:37.226188   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:38.507176   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:41.067602   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:46.187779   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:46.213988   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:46.219259   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:46.229551   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:46.249801   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:46.290399   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:46.371027   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-364511 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m13.556576688s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-364511 -n no-preload-364511
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
E1218 23:38:46.531984   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735: exit status 7 (94.1911ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-414735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (365.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414735 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E1218 23:38:46.852620   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:47.493264   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:48.773678   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:51.334836   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:38:51.850118   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:38:56.428766   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:38:56.455974   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:39:06.696422   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:39:07.841332   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:39:10.828428   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:39:16.909240   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:39:27.176867   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:39:50.330426   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.335710   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.346043   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.366303   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.406652   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.487058   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.647528   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:50.967677   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:51.607843   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:52.888721   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:55.449264   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:39:57.848438   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:39:57.869596   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:40:00.570318   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:40:08.137517   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:40:10.811424   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:40:13.770336   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:40:29.761821   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:40:31.292356   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:40:45.617996   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:41:12.253509   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:41:13.305317   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/auto-246992/client.crt: no such file or directory
E1218 23:41:19.790754   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
E1218 23:41:26.983112   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:41:30.057978   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
E1218 23:41:54.669494   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/kindnet-246992/client.crt: no such file or directory
E1218 23:42:29.925917   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:42:34.174683   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
E1218 23:42:39.919544   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/ingress-addon-legacy-875577/client.crt: no such file or directory
E1218 23:42:45.918639   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
E1218 23:42:47.847066   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/addons-522125/client.crt: no such file or directory
E1218 23:42:57.611278   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/calico-246992/client.crt: no such file or directory
E1218 23:43:13.602198   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/custom-flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414735 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (6m5.591288529s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (365.89s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2zdrp" [41ac7ff1-3a6e-4494-8f3e-370eb1c08dd5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006392926s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2zdrp" [41ac7ff1-3a6e-4494-8f3e-370eb1c08dd5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004930228s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-554889 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-554889 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-554889 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-554889 -n embed-certs-554889
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-554889 -n embed-certs-554889: exit status 2 (269.502508ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-554889 -n embed-certs-554889
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-554889 -n embed-certs-554889: exit status 2 (273.546124ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-554889 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-554889 -n embed-certs-554889
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-554889 -n embed-certs-554889
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/FirstStart (67.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-317609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-317609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m7.032839313s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mxf2n" [49bb1bd4-a400-40b4-b043-cc176c922dcd] Running
E1218 23:43:35.946772   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/enable-default-cni-246992/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003883081s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mxf2n" [49bb1bd4-a400-40b4-b043-cc176c922dcd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004443337s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-364511 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-364511 image list --format=json
E1218 23:43:46.214319   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-364511 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364511 -n no-preload-364511
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364511 -n no-preload-364511: exit status 2 (258.754797ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364511 -n no-preload-364511
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364511 -n no-preload-364511: exit status 2 (262.345624ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-364511 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-364511 -n no-preload-364511
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-364511 -n no-preload-364511
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7kxpb" [d2080569-51a7-4940-a3ce-0182233de857] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004144227s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7kxpb" [d2080569-51a7-4940-a3ce-0182233de857] Running
E1218 23:44:13.899128   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/flannel-246992/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004733996s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-073675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-073675 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-073675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073675 -n old-k8s-version-073675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073675 -n old-k8s-version-073675: exit status 2 (265.766674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073675 -n old-k8s-version-073675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073675 -n old-k8s-version-073675: exit status 2 (276.694371ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-073675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-073675 -n old-k8s-version-073675
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-073675 -n old-k8s-version-073675
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-317609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-317609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.540919899s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.54s)

TestStartStop/group/newest-cni/serial/Stop (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-317609 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-317609 --alsologtostderr -v=3: (2.106640901s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-317609 -n newest-cni-317609
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-317609 -n newest-cni-317609: exit status 7 (72.136122ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-317609 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (43.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-317609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E1218 23:44:40.895661   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
E1218 23:44:50.330791   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/bridge-246992/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-317609 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (43.398286745s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-317609 -n newest-cni-317609
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (43.72s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-678dp" [728b1c0d-6402-4001-9c65-3d781ca07c2a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1218 23:44:57.848501   13608 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/functional-220634/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-678dp" [728b1c0d-6402-4001-9c65-3d781ca07c2a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005127688s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-678dp" [728b1c0d-6402-4001-9c65-3d781ca07c2a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00503312s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-414735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414735 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-414735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735: exit status 2 (266.111741ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735: exit status 2 (271.750146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-414735 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414735 -n default-k8s-diff-port-414735
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)
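Note: the Pause subtests in this report all follow the same cycle: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (non-zero status exits are tolerated here), then unpause and re-check. A minimal sketch of that cycle with the profile name taken from the log and a plain `minikube` binary assumed:

    minikube pause -p default-k8s-diff-port-414735
    minikube status --format='{{.APIServer}}' -p default-k8s-diff-port-414735   # expect "Paused"; exit status 2 is acceptable
    minikube status --format='{{.Kubelet}}'   -p default-k8s-diff-port-414735   # expect "Stopped"; exit status 2 is acceptable
    minikube unpause -p default-k8s-diff-port-414735
    minikube status --format='{{.APIServer}}' -p default-k8s-diff-port-414735   # should now exit 0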

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-317609 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-317609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-317609 -n newest-cni-317609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-317609 -n newest-cni-317609: exit status 2 (247.669524ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-317609 -n newest-cni-317609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-317609 -n newest-cni-317609: exit status 2 (249.341422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-317609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-317609 -n newest-cni-317609
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-317609 -n newest-cni-317609
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                    

Test skip (39/313)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
247 TestNetworkPlugins/group/kubenet 3.34
255 TestNetworkPlugins/group/cilium 4
265 TestStartStop/group/disable-driver-mounts 0.18
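Most of the skips above are driver- or runtime-gated (docker-only, darwin/windows-only, or gvisor disabled), so they are expected on this KVM/containerd job. To reproduce a single result locally you can filter with Go's -run flag; the ./test/integration path below is an assumption based on the upstream minikube repository layout, and harness-specific flags are omitted.

    # Sketch only: run one subtest by its full slash-separated name.
    go test -v -timeout 30m ./test/integration -run 'TestStartStop/group/newest-cni/serial/Pause'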
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-246992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-246992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.38:8443
  name: cert-expiration-177378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.221:8443
  name: running-upgrade-433756
contexts:
- context:
    cluster: cert-expiration-177378
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-177378
  name: cert-expiration-177378
- context:
    cluster: running-upgrade-433756
    user: running-upgrade-433756
  name: running-upgrade-433756
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-177378
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.key
- name: running-upgrade-433756
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.key
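This dump explains the "context was not found" errors in this section: the kubeconfig only carries cert-expiration-177378 and running-upgrade-433756, and current-context is empty, so any kubectl call against kubenet-246992 fails before reaching a cluster. A short sketch of the standard kubectl commands for inspecting and switching contexts (context names are taken from the config above; nothing else is assumed):

    kubectl config get-contexts                         # list the contexts this kubeconfig defines
    kubectl config current-context                      # fails here, since current-context is ""
    kubectl config use-context cert-expiration-177378   # select an existing context explicitly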

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-246992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-246992"

                                                
                                                
----------------------- debugLogs end: kubenet-246992 [took: 3.192732783s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-246992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-246992
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)
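As with the other network-plugin skips, the debug collector ran before any kubenet-246992 cluster existed, so every probe above reports a missing profile or context rather than a real failure, and the cleanup step then deletes the placeholder profile. The equivalent manual steps, using only commands already quoted in this log and assuming a plain `minikube` binary:

    minikube profile list               # confirm the profile was never created
    minikube delete -p kubenet-246992   # remove the leftover placeholder profile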

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-246992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-246992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-246992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-246992

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-246992

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-246992" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-246992" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-246992

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-246992

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-246992" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-246992" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-246992" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-246992" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-246992" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: kubelet daemon config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> k8s: kubelet logs:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.38:8443
  name: cert-expiration-177378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-6323/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.221:8443
  name: running-upgrade-433756
contexts:
- context:
    cluster: cert-expiration-177378
    extensions:
    - extension:
        last-update: Mon, 18 Dec 2023 23:26:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-177378
  name: cert-expiration-177378
- context:
    cluster: running-upgrade-433756
    user: running-upgrade-433756
  name: running-upgrade-433756
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-177378
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/cert-expiration-177378/client.key
- name: running-upgrade-433756
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.crt
    client-key: /home/jenkins/minikube-integration/17822-6323/.minikube/profiles/running-upgrade-433756/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-246992

>>> host: docker daemon status:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: docker daemon config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: docker system info:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: cri-docker daemon status:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: cri-docker daemon config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: cri-dockerd version:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: containerd daemon status:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: containerd daemon config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: containerd config dump:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: crio daemon status:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: crio daemon config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: /etc/crio:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

>>> host: crio config:
* Profile "cilium-246992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-246992"

----------------------- debugLogs end: cilium-246992 [took: 3.849002045s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-246992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-246992
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-323185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-323185
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
