Test Report: KVM_Linux_containerd 17967

10ecd0aeb1ec35670d13066c60edb6e287060cba:2024-01-16:32725

Failed tests (1/318)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 45    | TestAddons/parallel/Headlamp | 2.98s    |
TestAddons/parallel/Headlamp (2.98s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-874655 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-874655 --alsologtostderr -v=1: exit status 11 (446.701849ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0116 01:58:56.697414  567664 out.go:296] Setting OutFile to fd 1 ...
	I0116 01:58:56.697703  567664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:58:56.697739  567664 out.go:309] Setting ErrFile to fd 2...
	I0116 01:58:56.697756  567664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:58:56.698209  567664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 01:58:56.698983  567664 mustload.go:65] Loading cluster: addons-874655
	I0116 01:58:56.700123  567664 config.go:182] Loaded profile config "addons-874655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 01:58:56.700216  567664 addons.go:597] checking whether the cluster is paused
	I0116 01:58:56.700414  567664 config.go:182] Loaded profile config "addons-874655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 01:58:56.700444  567664 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:58:56.700932  567664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:58:56.701016  567664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:58:56.718178  567664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0116 01:58:56.718826  567664 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:58:56.719588  567664 main.go:141] libmachine: Using API Version  1
	I0116 01:58:56.719628  567664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:58:56.720059  567664 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:58:56.720297  567664 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:58:56.722162  567664 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:58:56.722405  567664 ssh_runner.go:195] Run: systemctl --version
	I0116 01:58:56.722432  567664 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:58:56.725228  567664 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:58:56.725641  567664 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:58:56.725671  567664 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:58:56.725880  567664 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:58:56.726084  567664 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:58:56.726245  567664 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:58:56.726375  567664 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:58:56.823655  567664 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 01:58:56.823783  567664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 01:58:56.995494  567664 cri.go:89] found id: "16947755a849854d470eec15f1b664507b182e454a95f6384259b72d350ed5de"
	I0116 01:58:56.995533  567664 cri.go:89] found id: "663d110f79a98635d89c2124f9dbe23bdc3d85b72d2ef6d4db4c0015d3694ea9"
	I0116 01:58:56.995540  567664 cri.go:89] found id: "df9bc4ef29d64fc48a86ed8a719a28c52f3da87cfc9974fa19cf5ba53c284c37"
	I0116 01:58:56.995554  567664 cri.go:89] found id: "99c04aa208a9bf8fa94d95d746bcf9052e58d94752410c4a30e17d697ec0023a"
	I0116 01:58:56.995565  567664 cri.go:89] found id: "a5e850fb620ec4e6d1050a98439e6cbe87d0338e7dba10b43303c818728d1b55"
	I0116 01:58:56.995584  567664 cri.go:89] found id: "e0b01d710aeeffd522161c14300e74a8c1e07185e894a1c7e36a3f1176ded9d7"
	I0116 01:58:56.995589  567664 cri.go:89] found id: "7e2bc7e862303c9ed6e05aa07d5db99aefb5c94560d66364e6005435394fe810"
	I0116 01:58:56.995594  567664 cri.go:89] found id: "800b4b42fb15c96952192dc78163d761adebb6b3becd91f7ba978806b5ecc303"
	I0116 01:58:56.995599  567664 cri.go:89] found id: "d66d56ecb50b42bbcc3b9b929d31bee9dc7349843f2c2d8cd594034422698789"
	I0116 01:58:56.995612  567664 cri.go:89] found id: "ed9f7c3a5daec08e50f3fd5c12a75b999bddbd33aed28f71dcc04b74c4aa4290"
	I0116 01:58:56.995617  567664 cri.go:89] found id: "4930fa202417a75505a676a94bf1227c97a92e5efb2ae3cdc1be2f0f459de767"
	I0116 01:58:56.995622  567664 cri.go:89] found id: "8cdab95870928b77f3d3419ba215529a377dfae706207c702a56422a96e57fe8"
	I0116 01:58:56.995627  567664 cri.go:89] found id: "01577e925846a47d1fc5c5214aca11e9a139348d4be81979671f92977e3e0503"
	I0116 01:58:56.995634  567664 cri.go:89] found id: "0185312efa995d56c35f389356765d2ec3d516115482241acb480471655499b8"
	I0116 01:58:56.995639  567664 cri.go:89] found id: "d495d116f12b362205c7493e1405782f1b5d45d0a8b74b6243901138efc14ac0"
	I0116 01:58:56.995645  567664 cri.go:89] found id: "b6c383016f6feb534657bde7edf8af0ffe2f72fd3f6cac8f6e51154801314ec8"
	I0116 01:58:56.995650  567664 cri.go:89] found id: "e77edeeac5c6cbe28cfdcf68a4c17e7add5effa37e0d36432844374b617f58cb"
	I0116 01:58:56.995659  567664 cri.go:89] found id: "d3b48cbdf9598c266af4e49abf8b012e38f15d1a33f818222bc4ad874add39c0"
	I0116 01:58:56.995668  567664 cri.go:89] found id: "a401d2fb30cdee44bc8f2d539cd9e4a6b44f3dd6933a62fac33ce544e6a90d21"
	I0116 01:58:56.995673  567664 cri.go:89] found id: "138e7cf6410ad1a952ff22471fbad10e7fc87031924284c46e86f2b6c7c9cb7c"
	I0116 01:58:56.995682  567664 cri.go:89] found id: ""
	I0116 01:58:56.995758  567664 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0116 01:58:57.055098  567664 main.go:141] libmachine: Making call to close driver server
	I0116 01:58:57.055130  567664 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:58:57.055474  567664 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:58:57.055509  567664 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:58:57.055542  567664 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:58:57.058095  567664 out.go:177] 
	W0116 01:58:57.059599  567664 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T01:58:57Z" level=error msg="stat /run/containerd/runc/k8s.io/7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3: no such file or directory"
	
	W0116 01:58:57.059626  567664 out.go:239] * 
	W0116 01:58:57.063459  567664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 01:58:57.065057  567664 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-874655 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-874655 -n addons-874655
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 logs -n 25: (1.615242761s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-599955              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| delete  | -p download-only-599955              | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| start   | -o=json --download-only              | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-772119              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| delete  | -p download-only-772119              | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| start   | -o=json --download-only              | download-only-542475 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-542475              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| delete  | -p download-only-542475              | download-only-542475 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| delete  | -p download-only-599955              | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| delete  | -p download-only-772119              | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| delete  | -p download-only-542475              | download-only-542475 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| start   | --download-only -p                   | binary-mirror-142069 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC |                     |
	|         | binary-mirror-142069                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45435               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-142069              | binary-mirror-142069 | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:56 UTC |
	| addons  | enable dashboard -p                  | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC |                     |
	|         | addons-874655                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC |                     |
	|         | addons-874655                        |                      |         |         |                     |                     |
	| start   | -p addons-874655 --wait=true         | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:56 UTC | 16 Jan 24 01:58 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC | 16 Jan 24 01:58 UTC |
	|         | addons-874655                        |                      |         |         |                     |                     |
	| addons  | addons-874655 addons                 | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC | 16 Jan 24 01:58 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-874655 ip                     | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC | 16 Jan 24 01:58 UTC |
	| addons  | addons-874655 addons disable         | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC | 16 Jan 24 01:58 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC | 16 Jan 24 01:58 UTC |
	|         | -p addons-874655                     |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-874655        | jenkins | v1.32.0 | 16 Jan 24 01:58 UTC |                     |
	|         | -p addons-874655                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:56:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:56:13.111496  566425 out.go:296] Setting OutFile to fd 1 ...
	I0116 01:56:13.111643  566425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:56:13.111654  566425 out.go:309] Setting ErrFile to fd 2...
	I0116 01:56:13.111660  566425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:56:13.111911  566425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 01:56:13.112630  566425 out.go:303] Setting JSON to false
	I0116 01:56:13.113688  566425 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9516,"bootTime":1705360657,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 01:56:13.113766  566425 start.go:138] virtualization: kvm guest
	I0116 01:56:13.115977  566425 out.go:177] * [addons-874655] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 01:56:13.117501  566425 notify.go:220] Checking for updates...
	I0116 01:56:13.117512  566425 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 01:56:13.119027  566425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 01:56:13.120624  566425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 01:56:13.121902  566425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:56:13.123265  566425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 01:56:13.125286  566425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 01:56:13.126921  566425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:56:13.160991  566425 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 01:56:13.162308  566425 start.go:298] selected driver: kvm2
	I0116 01:56:13.162322  566425 start.go:902] validating driver "kvm2" against <nil>
	I0116 01:56:13.162335  566425 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 01:56:13.163030  566425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:56:13.163115  566425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-558382/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 01:56:13.178431  566425 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 01:56:13.178519  566425 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:56:13.178759  566425 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 01:56:13.178814  566425 cni.go:84] Creating CNI manager for ""
	I0116 01:56:13.178827  566425 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:56:13.178837  566425 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:56:13.178862  566425 start_flags.go:321] config:
	{Name:addons-874655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-874655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:56:13.178995  566425 iso.go:125] acquiring lock: {Name:mkfcdc81fb6f1fb9928eb379c0846826cfbbc8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:56:13.181052  566425 out.go:177] * Starting control plane node addons-874655 in cluster addons-874655
	I0116 01:56:13.182740  566425 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 01:56:13.182783  566425 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0116 01:56:13.182793  566425 cache.go:56] Caching tarball of preloaded images
	I0116 01:56:13.182887  566425 preload.go:174] Found /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 01:56:13.182898  566425 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0116 01:56:13.183226  566425 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/config.json ...
	I0116 01:56:13.183266  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/config.json: {Name:mkb36875b95f748e3f30ba7fc700508c0ac4155c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:13.183416  566425 start.go:365] acquiring machines lock for addons-874655: {Name:mkd5c5db31d082afc8998bd8dff7207d790937ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 01:56:13.183458  566425 start.go:369] acquired machines lock for "addons-874655" in 30.132µs
	I0116 01:56:13.183480  566425 start.go:93] Provisioning new machine with config: &{Name:addons-874655 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-874655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 01:56:13.183559  566425 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 01:56:13.185231  566425 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 01:56:13.185388  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:56:13.185436  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:56:13.200744  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0116 01:56:13.201341  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:56:13.202025  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:56:13.202049  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:56:13.202541  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:56:13.202778  566425 main.go:141] libmachine: (addons-874655) Calling .GetMachineName
	I0116 01:56:13.202961  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:13.203163  566425 start.go:159] libmachine.API.Create for "addons-874655" (driver="kvm2")
	I0116 01:56:13.203206  566425 client.go:168] LocalClient.Create starting
	I0116 01:56:13.203259  566425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem
	I0116 01:56:13.300876  566425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/cert.pem
	I0116 01:56:13.414340  566425 main.go:141] libmachine: Running pre-create checks...
	I0116 01:56:13.414372  566425 main.go:141] libmachine: (addons-874655) Calling .PreCreateCheck
	I0116 01:56:13.415006  566425 main.go:141] libmachine: (addons-874655) Calling .GetConfigRaw
	I0116 01:56:13.415503  566425 main.go:141] libmachine: Creating machine...
	I0116 01:56:13.415518  566425 main.go:141] libmachine: (addons-874655) Calling .Create
	I0116 01:56:13.415698  566425 main.go:141] libmachine: (addons-874655) Creating KVM machine...
	I0116 01:56:13.417316  566425 main.go:141] libmachine: (addons-874655) DBG | found existing default KVM network
	I0116 01:56:13.418580  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:13.418361  566447 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0116 01:56:13.424406  566425 main.go:141] libmachine: (addons-874655) DBG | trying to create private KVM network mk-addons-874655 192.168.39.0/24...
	I0116 01:56:13.503343  566425 main.go:141] libmachine: (addons-874655) Setting up store path in /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655 ...
	I0116 01:56:13.503386  566425 main.go:141] libmachine: (addons-874655) DBG | private KVM network mk-addons-874655 192.168.39.0/24 created
	I0116 01:56:13.503402  566425 main.go:141] libmachine: (addons-874655) Building disk image from file:///home/jenkins/minikube-integration/17967-558382/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 01:56:13.503480  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:13.503238  566447 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:56:13.503531  566425 main.go:141] libmachine: (addons-874655) Downloading /home/jenkins/minikube-integration/17967-558382/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17967-558382/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 01:56:13.754069  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:13.753933  566447 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa...
	I0116 01:56:14.040768  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:14.040592  566447 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/addons-874655.rawdisk...
	I0116 01:56:14.040812  566425 main.go:141] libmachine: (addons-874655) DBG | Writing magic tar header
	I0116 01:56:14.040826  566425 main.go:141] libmachine: (addons-874655) DBG | Writing SSH key tar header
	I0116 01:56:14.040838  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:14.040726  566447 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655 ...
	I0116 01:56:14.040850  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655
	I0116 01:56:14.040910  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655 (perms=drwx------)
	I0116 01:56:14.040946  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins/minikube-integration/17967-558382/.minikube/machines (perms=drwxr-xr-x)
	I0116 01:56:14.040957  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-558382/.minikube/machines
	I0116 01:56:14.040968  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:56:14.040975  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17967-558382
	I0116 01:56:14.040984  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 01:56:14.040994  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home/jenkins
	I0116 01:56:14.041002  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins/minikube-integration/17967-558382/.minikube (perms=drwxr-xr-x)
	I0116 01:56:14.041014  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins/minikube-integration/17967-558382 (perms=drwxrwxr-x)
	I0116 01:56:14.041028  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 01:56:14.041034  566425 main.go:141] libmachine: (addons-874655) DBG | Checking permissions on dir: /home
	I0116 01:56:14.041043  566425 main.go:141] libmachine: (addons-874655) DBG | Skipping /home - not owner
	I0116 01:56:14.041051  566425 main.go:141] libmachine: (addons-874655) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 01:56:14.041057  566425 main.go:141] libmachine: (addons-874655) Creating domain...
	I0116 01:56:14.042541  566425 main.go:141] libmachine: (addons-874655) define libvirt domain using xml: 
	I0116 01:56:14.042575  566425 main.go:141] libmachine: (addons-874655) <domain type='kvm'>
	I0116 01:56:14.042583  566425 main.go:141] libmachine: (addons-874655)   <name>addons-874655</name>
	I0116 01:56:14.042592  566425 main.go:141] libmachine: (addons-874655)   <memory unit='MiB'>4000</memory>
	I0116 01:56:14.042598  566425 main.go:141] libmachine: (addons-874655)   <vcpu>2</vcpu>
	I0116 01:56:14.042603  566425 main.go:141] libmachine: (addons-874655)   <features>
	I0116 01:56:14.042609  566425 main.go:141] libmachine: (addons-874655)     <acpi/>
	I0116 01:56:14.042614  566425 main.go:141] libmachine: (addons-874655)     <apic/>
	I0116 01:56:14.042620  566425 main.go:141] libmachine: (addons-874655)     <pae/>
	I0116 01:56:14.042631  566425 main.go:141] libmachine: (addons-874655)     
	I0116 01:56:14.042641  566425 main.go:141] libmachine: (addons-874655)   </features>
	I0116 01:56:14.042651  566425 main.go:141] libmachine: (addons-874655)   <cpu mode='host-passthrough'>
	I0116 01:56:14.042663  566425 main.go:141] libmachine: (addons-874655)   
	I0116 01:56:14.042669  566425 main.go:141] libmachine: (addons-874655)   </cpu>
	I0116 01:56:14.042721  566425 main.go:141] libmachine: (addons-874655)   <os>
	I0116 01:56:14.042766  566425 main.go:141] libmachine: (addons-874655)     <type>hvm</type>
	I0116 01:56:14.042855  566425 main.go:141] libmachine: (addons-874655)     <boot dev='cdrom'/>
	I0116 01:56:14.042878  566425 main.go:141] libmachine: (addons-874655)     <boot dev='hd'/>
	I0116 01:56:14.042888  566425 main.go:141] libmachine: (addons-874655)     <bootmenu enable='no'/>
	I0116 01:56:14.042958  566425 main.go:141] libmachine: (addons-874655)   </os>
	I0116 01:56:14.042985  566425 main.go:141] libmachine: (addons-874655)   <devices>
	I0116 01:56:14.042997  566425 main.go:141] libmachine: (addons-874655)     <disk type='file' device='cdrom'>
	I0116 01:56:14.043012  566425 main.go:141] libmachine: (addons-874655)       <source file='/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/boot2docker.iso'/>
	I0116 01:56:14.043023  566425 main.go:141] libmachine: (addons-874655)       <target dev='hdc' bus='scsi'/>
	I0116 01:56:14.043029  566425 main.go:141] libmachine: (addons-874655)       <readonly/>
	I0116 01:56:14.043035  566425 main.go:141] libmachine: (addons-874655)     </disk>
	I0116 01:56:14.043044  566425 main.go:141] libmachine: (addons-874655)     <disk type='file' device='disk'>
	I0116 01:56:14.043052  566425 main.go:141] libmachine: (addons-874655)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 01:56:14.043062  566425 main.go:141] libmachine: (addons-874655)       <source file='/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/addons-874655.rawdisk'/>
	I0116 01:56:14.043070  566425 main.go:141] libmachine: (addons-874655)       <target dev='hda' bus='virtio'/>
	I0116 01:56:14.043076  566425 main.go:141] libmachine: (addons-874655)     </disk>
	I0116 01:56:14.043087  566425 main.go:141] libmachine: (addons-874655)     <interface type='network'>
	I0116 01:56:14.043096  566425 main.go:141] libmachine: (addons-874655)       <source network='mk-addons-874655'/>
	I0116 01:56:14.043102  566425 main.go:141] libmachine: (addons-874655)       <model type='virtio'/>
	I0116 01:56:14.043109  566425 main.go:141] libmachine: (addons-874655)     </interface>
	I0116 01:56:14.043116  566425 main.go:141] libmachine: (addons-874655)     <interface type='network'>
	I0116 01:56:14.043122  566425 main.go:141] libmachine: (addons-874655)       <source network='default'/>
	I0116 01:56:14.043130  566425 main.go:141] libmachine: (addons-874655)       <model type='virtio'/>
	I0116 01:56:14.043135  566425 main.go:141] libmachine: (addons-874655)     </interface>
	I0116 01:56:14.043140  566425 main.go:141] libmachine: (addons-874655)     <serial type='pty'>
	I0116 01:56:14.043146  566425 main.go:141] libmachine: (addons-874655)       <target port='0'/>
	I0116 01:56:14.043151  566425 main.go:141] libmachine: (addons-874655)     </serial>
	I0116 01:56:14.043157  566425 main.go:141] libmachine: (addons-874655)     <console type='pty'>
	I0116 01:56:14.043162  566425 main.go:141] libmachine: (addons-874655)       <target type='serial' port='0'/>
	I0116 01:56:14.043168  566425 main.go:141] libmachine: (addons-874655)     </console>
	I0116 01:56:14.043174  566425 main.go:141] libmachine: (addons-874655)     <rng model='virtio'>
	I0116 01:56:14.043181  566425 main.go:141] libmachine: (addons-874655)       <backend model='random'>/dev/random</backend>
	I0116 01:56:14.043186  566425 main.go:141] libmachine: (addons-874655)     </rng>
	I0116 01:56:14.043191  566425 main.go:141] libmachine: (addons-874655)     
	I0116 01:56:14.043200  566425 main.go:141] libmachine: (addons-874655)     
	I0116 01:56:14.043215  566425 main.go:141] libmachine: (addons-874655)   </devices>
	I0116 01:56:14.043226  566425 main.go:141] libmachine: (addons-874655) </domain>
	I0116 01:56:14.043236  566425 main.go:141] libmachine: (addons-874655) 
	I0116 01:56:14.049639  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:ca:2d:41 in network default
	I0116 01:56:14.050388  566425 main.go:141] libmachine: (addons-874655) Ensuring networks are active...
	I0116 01:56:14.050425  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:14.051190  566425 main.go:141] libmachine: (addons-874655) Ensuring network default is active
	I0116 01:56:14.051539  566425 main.go:141] libmachine: (addons-874655) Ensuring network mk-addons-874655 is active
	I0116 01:56:14.052732  566425 main.go:141] libmachine: (addons-874655) Getting domain xml...
	I0116 01:56:14.053649  566425 main.go:141] libmachine: (addons-874655) Creating domain...
	I0116 01:56:15.468883  566425 main.go:141] libmachine: (addons-874655) Waiting to get IP...
	I0116 01:56:15.469662  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:15.470350  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:15.470433  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:15.470226  566447 retry.go:31] will retry after 201.592692ms: waiting for machine to come up
	I0116 01:56:15.673965  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:15.674459  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:15.674504  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:15.674402  566447 retry.go:31] will retry after 318.533962ms: waiting for machine to come up
	I0116 01:56:15.994773  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:15.995206  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:15.995240  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:15.995064  566447 retry.go:31] will retry after 357.642173ms: waiting for machine to come up
	I0116 01:56:16.354496  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:16.354919  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:16.354944  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:16.354884  566447 retry.go:31] will retry after 540.471927ms: waiting for machine to come up
	I0116 01:56:16.896607  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:16.897026  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:16.897085  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:16.896928  566447 retry.go:31] will retry after 516.648656ms: waiting for machine to come up
	I0116 01:56:17.415527  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:17.416020  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:17.416053  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:17.415970  566447 retry.go:31] will retry after 752.357803ms: waiting for machine to come up
	I0116 01:56:18.169953  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:18.170386  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:18.170415  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:18.170340  566447 retry.go:31] will retry after 1.025821607s: waiting for machine to come up
	I0116 01:56:19.197960  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:19.198424  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:19.198456  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:19.198331  566447 retry.go:31] will retry after 1.395022217s: waiting for machine to come up
	I0116 01:56:20.594874  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:20.595429  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:20.595458  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:20.595388  566447 retry.go:31] will retry after 1.762310198s: waiting for machine to come up
	I0116 01:56:22.360658  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:22.361091  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:22.361124  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:22.361040  566447 retry.go:31] will retry after 1.67444468s: waiting for machine to come up
	I0116 01:56:24.037818  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:24.038267  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:24.038302  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:24.038240  566447 retry.go:31] will retry after 2.30436737s: waiting for machine to come up
	I0116 01:56:26.345279  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:26.345813  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:26.345843  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:26.345733  566447 retry.go:31] will retry after 3.61922858s: waiting for machine to come up
	I0116 01:56:29.967283  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:29.967846  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:29.967872  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:29.967819  566447 retry.go:31] will retry after 3.436951748s: waiting for machine to come up
	I0116 01:56:33.406531  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:33.406979  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find current IP address of domain addons-874655 in network mk-addons-874655
	I0116 01:56:33.407005  566425 main.go:141] libmachine: (addons-874655) DBG | I0116 01:56:33.406918  566447 retry.go:31] will retry after 4.528742746s: waiting for machine to come up
	I0116 01:56:37.941743  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:37.942315  566425 main.go:141] libmachine: (addons-874655) Found IP for machine: 192.168.39.252
	I0116 01:56:37.942351  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has current primary IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:37.942360  566425 main.go:141] libmachine: (addons-874655) Reserving static IP address...
	I0116 01:56:37.942767  566425 main.go:141] libmachine: (addons-874655) DBG | unable to find host DHCP lease matching {name: "addons-874655", mac: "52:54:00:1d:f8:62", ip: "192.168.39.252"} in network mk-addons-874655
	I0116 01:56:38.022371  566425 main.go:141] libmachine: (addons-874655) DBG | Getting to WaitForSSH function...
	I0116 01:56:38.022406  566425 main.go:141] libmachine: (addons-874655) Reserved static IP address: 192.168.39.252
	I0116 01:56:38.022422  566425 main.go:141] libmachine: (addons-874655) Waiting for SSH to be available...
	I0116 01:56:38.025414  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.025775  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.025811  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.025924  566425 main.go:141] libmachine: (addons-874655) DBG | Using SSH client type: external
	I0116 01:56:38.025958  566425 main.go:141] libmachine: (addons-874655) DBG | Using SSH private key: /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa (-rw-------)
	I0116 01:56:38.025999  566425 main.go:141] libmachine: (addons-874655) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 01:56:38.026021  566425 main.go:141] libmachine: (addons-874655) DBG | About to run SSH command:
	I0116 01:56:38.026035  566425 main.go:141] libmachine: (addons-874655) DBG | exit 0
	I0116 01:56:38.119838  566425 main.go:141] libmachine: (addons-874655) DBG | SSH cmd err, output: <nil>: 
	I0116 01:56:38.120177  566425 main.go:141] libmachine: (addons-874655) KVM machine creation complete!
	I0116 01:56:38.120454  566425 main.go:141] libmachine: (addons-874655) Calling .GetConfigRaw
	I0116 01:56:38.121019  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:38.121217  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:38.121385  566425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 01:56:38.121399  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:56:38.122632  566425 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 01:56:38.122648  566425 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 01:56:38.122654  566425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 01:56:38.122661  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.124967  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.125306  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.125336  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.125456  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:38.125665  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.125849  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.125998  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:38.126182  566425 main.go:141] libmachine: Using SSH client type: native
	I0116 01:56:38.126589  566425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0116 01:56:38.126602  566425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 01:56:38.247109  566425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 01:56:38.247140  566425 main.go:141] libmachine: Detecting the provisioner...
	I0116 01:56:38.247154  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.250074  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.250519  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.250550  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.250729  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:38.250981  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.251237  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.251421  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:38.251675  566425 main.go:141] libmachine: Using SSH client type: native
	I0116 01:56:38.252132  566425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0116 01:56:38.252155  566425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 01:56:38.376963  566425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 01:56:38.377035  566425 main.go:141] libmachine: found compatible host: buildroot
	I0116 01:56:38.377043  566425 main.go:141] libmachine: Provisioning with buildroot...
	I0116 01:56:38.377052  566425 main.go:141] libmachine: (addons-874655) Calling .GetMachineName
	I0116 01:56:38.377350  566425 buildroot.go:166] provisioning hostname "addons-874655"
	I0116 01:56:38.377391  566425 main.go:141] libmachine: (addons-874655) Calling .GetMachineName
	I0116 01:56:38.377614  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.380335  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.380673  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.380702  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.380907  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:38.381134  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.381345  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.381483  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:38.381679  566425 main.go:141] libmachine: Using SSH client type: native
	I0116 01:56:38.381997  566425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0116 01:56:38.382011  566425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-874655 && echo "addons-874655" | sudo tee /etc/hostname
	I0116 01:56:38.516426  566425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-874655
	
	I0116 01:56:38.516469  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.519121  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.519498  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.519528  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.519749  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:38.519972  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.520118  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.520306  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:38.520497  566425 main.go:141] libmachine: Using SSH client type: native
	I0116 01:56:38.520975  566425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0116 01:56:38.520999  566425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-874655' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-874655/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-874655' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 01:56:38.651740  566425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 01:56:38.651776  566425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17967-558382/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-558382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-558382/.minikube}
	I0116 01:56:38.651816  566425 buildroot.go:174] setting up certificates
	I0116 01:56:38.651833  566425 provision.go:83] configureAuth start
	I0116 01:56:38.651849  566425 main.go:141] libmachine: (addons-874655) Calling .GetMachineName
	I0116 01:56:38.652186  566425 main.go:141] libmachine: (addons-874655) Calling .GetIP
	I0116 01:56:38.654656  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.655226  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.655251  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.655463  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.657848  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.658284  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.658332  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.658448  566425 provision.go:138] copyHostCerts
	I0116 01:56:38.658534  566425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-558382/.minikube/ca.pem (1082 bytes)
	I0116 01:56:38.658674  566425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-558382/.minikube/cert.pem (1123 bytes)
	I0116 01:56:38.658780  566425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-558382/.minikube/key.pem (1679 bytes)
	I0116 01:56:38.658850  566425 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-558382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca-key.pem org=jenkins.addons-874655 san=[192.168.39.252 192.168.39.252 localhost 127.0.0.1 minikube addons-874655]
	I0116 01:56:38.888579  566425 provision.go:172] copyRemoteCerts
	I0116 01:56:38.888658  566425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 01:56:38.888700  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:38.891316  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.891636  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:38.891675  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:38.891871  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:38.892124  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:38.892358  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:38.892565  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:56:38.981068  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 01:56:39.003260  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 01:56:39.024935  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 01:56:39.046289  566425 provision.go:86] duration metric: configureAuth took 394.440321ms
	I0116 01:56:39.046315  566425 buildroot.go:189] setting minikube options for container-runtime
	I0116 01:56:39.046526  566425 config.go:182] Loaded profile config "addons-874655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 01:56:39.046551  566425 main.go:141] libmachine: Checking connection to Docker...
	I0116 01:56:39.046570  566425 main.go:141] libmachine: (addons-874655) Calling .GetURL
	I0116 01:56:39.047884  566425 main.go:141] libmachine: (addons-874655) DBG | Using libvirt version 6000000
	I0116 01:56:39.050114  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.050493  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.050526  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.050684  566425 main.go:141] libmachine: Docker is up and running!
	I0116 01:56:39.050701  566425 main.go:141] libmachine: Reticulating splines...
	I0116 01:56:39.050708  566425 client.go:171] LocalClient.Create took 25.847490612s
	I0116 01:56:39.050730  566425 start.go:167] duration metric: libmachine.API.Create for "addons-874655" took 25.847570025s
	I0116 01:56:39.050740  566425 start.go:300] post-start starting for "addons-874655" (driver="kvm2")
	I0116 01:56:39.050750  566425 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 01:56:39.050768  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:39.051078  566425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 01:56:39.051107  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:39.053486  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.053771  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.053806  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.053960  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:39.054161  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:39.054340  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:39.054477  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:56:39.145317  566425 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 01:56:39.149561  566425 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 01:56:39.149598  566425 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-558382/.minikube/addons for local assets ...
	I0116 01:56:39.149709  566425 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-558382/.minikube/files for local assets ...
	I0116 01:56:39.149741  566425 start.go:303] post-start completed in 98.99328ms
	I0116 01:56:39.149792  566425 main.go:141] libmachine: (addons-874655) Calling .GetConfigRaw
	I0116 01:56:39.150430  566425 main.go:141] libmachine: (addons-874655) Calling .GetIP
	I0116 01:56:39.153330  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.153913  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.153948  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.154305  566425 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/config.json ...
	I0116 01:56:39.154493  566425 start.go:128] duration metric: createHost completed in 25.970921895s
	I0116 01:56:39.154519  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:39.156733  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.157048  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.157077  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.157225  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:39.157422  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:39.157627  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:39.157763  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:39.157885  566425 main.go:141] libmachine: Using SSH client type: native
	I0116 01:56:39.158265  566425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0116 01:56:39.158280  566425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 01:56:39.280478  566425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705370199.252354803
	
	I0116 01:56:39.280506  566425 fix.go:206] guest clock: 1705370199.252354803
	I0116 01:56:39.280516  566425 fix.go:219] Guest: 2024-01-16 01:56:39.252354803 +0000 UTC Remote: 2024-01-16 01:56:39.154506136 +0000 UTC m=+26.093253077 (delta=97.848667ms)
	I0116 01:56:39.280538  566425 fix.go:190] guest clock delta is within tolerance: 97.848667ms
	I0116 01:56:39.280542  566425 start.go:83] releasing machines lock for "addons-874655", held for 26.097070993s
	I0116 01:56:39.280583  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:39.280927  566425 main.go:141] libmachine: (addons-874655) Calling .GetIP
	I0116 01:56:39.283679  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.284072  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.284102  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.284261  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:39.284813  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:39.285051  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:56:39.285157  566425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 01:56:39.285237  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:39.285324  566425 ssh_runner.go:195] Run: cat /version.json
	I0116 01:56:39.285348  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:56:39.287881  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.288198  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.288233  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.288255  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.288385  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:39.288580  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:39.288787  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:39.288801  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:39.288852  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:39.288986  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:56:39.289051  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:56:39.289229  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:56:39.289354  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:56:39.289499  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:56:39.424589  566425 ssh_runner.go:195] Run: systemctl --version
	I0116 01:56:39.430181  566425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 01:56:39.435380  566425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 01:56:39.435476  566425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 01:56:39.452265  566425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 01:56:39.452292  566425 start.go:475] detecting cgroup driver to use...
	I0116 01:56:39.452379  566425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 01:56:39.485068  566425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 01:56:39.497927  566425 docker.go:217] disabling cri-docker service (if available) ...
	I0116 01:56:39.497998  566425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 01:56:39.510552  566425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 01:56:39.523710  566425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 01:56:39.635220  566425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 01:56:39.749607  566425 docker.go:233] disabling docker service ...
	I0116 01:56:39.749687  566425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 01:56:39.763065  566425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 01:56:39.774630  566425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 01:56:39.873196  566425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 01:56:39.970822  566425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 01:56:39.983958  566425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 01:56:40.000424  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 01:56:40.010125  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 01:56:40.020091  566425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 01:56:40.020178  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 01:56:40.030095  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 01:56:40.039929  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 01:56:40.049852  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 01:56:40.059811  566425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 01:56:40.069818  566425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 01:56:40.079868  566425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 01:56:40.088593  566425 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 01:56:40.088684  566425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 01:56:40.101468  566425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 01:56:40.111918  566425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:56:40.220912  566425 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 01:56:40.256857  566425 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0116 01:56:40.256961  566425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 01:56:40.263363  566425 retry.go:31] will retry after 928.016403ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0116 01:56:41.191580  566425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0116 01:56:41.196776  566425 start.go:543] Will wait 60s for crictl version
	I0116 01:56:41.196857  566425 ssh_runner.go:195] Run: which crictl
	I0116 01:56:41.200296  566425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 01:56:41.235650  566425 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0116 01:56:41.235756  566425 ssh_runner.go:195] Run: containerd --version
	I0116 01:56:41.268325  566425 ssh_runner.go:195] Run: containerd --version
	I0116 01:56:41.296898  566425 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0116 01:56:41.298612  566425 main.go:141] libmachine: (addons-874655) Calling .GetIP
	I0116 01:56:41.301638  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:41.302007  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:56:41.302040  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:56:41.302337  566425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 01:56:41.306269  566425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 01:56:41.317406  566425 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 01:56:41.317477  566425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 01:56:41.355158  566425 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 01:56:41.355248  566425 ssh_runner.go:195] Run: which lz4
	I0116 01:56:41.359020  566425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 01:56:41.363114  566425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 01:56:41.363148  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0116 01:56:43.233886  566425 containerd.go:548] Took 1.874896 seconds to copy over tarball
	I0116 01:56:43.233998  566425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 01:56:46.238836  566425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.004793714s)
	I0116 01:56:46.238878  566425 containerd.go:555] Took 3.004954 seconds to extract the tarball
	I0116 01:56:46.238889  566425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 01:56:46.279305  566425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:56:46.378521  566425 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 01:56:46.403084  566425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 01:56:46.437257  566425 retry.go:31] will retry after 176.798811ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-16T01:56:46Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0116 01:56:46.614780  566425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 01:56:46.653939  566425 containerd.go:612] all images are preloaded for containerd runtime.
	I0116 01:56:46.653970  566425 cache_images.go:84] Images are preloaded, skipping loading
	I0116 01:56:46.654030  566425 ssh_runner.go:195] Run: sudo crictl info
	I0116 01:56:46.690997  566425 cni.go:84] Creating CNI manager for ""
	I0116 01:56:46.691027  566425 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:56:46.691060  566425 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 01:56:46.691087  566425 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-874655 NodeName:addons-874655 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 01:56:46.691295  566425 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-874655"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 01:56:46.691399  566425 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-874655 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-874655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 01:56:46.691464  566425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 01:56:46.700105  566425 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 01:56:46.700238  566425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 01:56:46.708664  566425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0116 01:56:46.724238  566425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 01:56:46.739299  566425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0116 01:56:46.754963  566425 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0116 01:56:46.758501  566425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 01:56:46.770289  566425 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655 for IP: 192.168.39.252
	I0116 01:56:46.770326  566425 certs.go:190] acquiring lock for shared ca certs: {Name:mkd5f6d18d877a143c5bf0a00887ef68747376af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:46.770488  566425 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17967-558382/.minikube/ca.key
	I0116 01:56:46.877165  566425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt ...
	I0116 01:56:46.877198  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt: {Name:mkf368e57f575898f8969205014eaebfd1b74c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:46.877406  566425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-558382/.minikube/ca.key ...
	I0116 01:56:46.877422  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/ca.key: {Name:mk4af9679e009142a99d2ef93debf27a983ebb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:46.877522  566425 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.key
	I0116 01:56:47.081990  566425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.crt ...
	I0116 01:56:47.082033  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.crt: {Name:mkb6c2617688013e80fc30c044f1fe495eb79eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.082263  566425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.key ...
	I0116 01:56:47.082279  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.key: {Name:mkf29702363e51cd1e1cbff1749a92aa3cd89a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.082410  566425 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.key
	I0116 01:56:47.082430  566425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt with IP's: []
	I0116 01:56:47.186684  566425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt ...
	I0116 01:56:47.186721  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: {Name:mk126a0d706d461fdbd9ad57dd2b2b4c7fd784b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.186938  566425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.key ...
	I0116 01:56:47.186953  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.key: {Name:mk83561499bd776933d4223d95b1ad93b95d9168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.187118  566425 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key.ba3365be
	I0116 01:56:47.187147  566425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt.ba3365be with IP's: [192.168.39.252 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 01:56:47.334282  566425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt.ba3365be ...
	I0116 01:56:47.334318  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt.ba3365be: {Name:mk4fa90cabc495caeb33959a066eabc8d487a1c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.334527  566425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key.ba3365be ...
	I0116 01:56:47.334548  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key.ba3365be: {Name:mk8990f9583797e0de32fbbf9640bb76bb4f22c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.334640  566425 certs.go:337] copying /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt.ba3365be -> /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt
	I0116 01:56:47.334733  566425 certs.go:341] copying /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key.ba3365be -> /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key
	I0116 01:56:47.334795  566425 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.key
	I0116 01:56:47.334823  566425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.crt with IP's: []
	I0116 01:56:47.392395  566425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.crt ...
	I0116 01:56:47.392430  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.crt: {Name:mk9a8dc3bafe2c0901a019d4307d955c4ab54e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.392633  566425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.key ...
	I0116 01:56:47.392651  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.key: {Name:mkf8625bccdefa77422a750e6e5ecd0839b88377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:47.392857  566425 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 01:56:47.392901  566425 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/home/jenkins/minikube-integration/17967-558382/.minikube/certs/ca.pem (1082 bytes)
	I0116 01:56:47.392940  566425 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/home/jenkins/minikube-integration/17967-558382/.minikube/certs/cert.pem (1123 bytes)
	I0116 01:56:47.392975  566425 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-558382/.minikube/certs/home/jenkins/minikube-integration/17967-558382/.minikube/certs/key.pem (1679 bytes)
	I0116 01:56:47.393804  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 01:56:47.417265  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 01:56:47.439630  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 01:56:47.464131  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 01:56:47.486564  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 01:56:47.508450  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 01:56:47.529718  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 01:56:47.551244  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 01:56:47.573224  566425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 01:56:47.594203  566425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 01:56:47.609800  566425 ssh_runner.go:195] Run: openssl version
	I0116 01:56:47.615085  566425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 01:56:47.626208  566425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:56:47.630933  566425 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:56:47.631000  566425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:56:47.636272  566425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 01:56:47.646503  566425 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 01:56:47.650395  566425 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 01:56:47.650454  566425 kubeadm.go:404] StartCluster: {Name:addons-874655 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-874655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:56:47.650546  566425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0116 01:56:47.650610  566425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 01:56:47.689812  566425 cri.go:89] found id: ""
	I0116 01:56:47.689893  566425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 01:56:47.699258  566425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 01:56:47.709792  566425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 01:56:47.720272  566425 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 01:56:47.720331  566425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 01:56:47.782001  566425 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 01:56:47.782099  566425 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 01:56:47.920021  566425 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 01:56:47.920190  566425 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 01:56:47.920364  566425 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 01:56:48.150541  566425 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 01:56:48.152556  566425 out.go:204]   - Generating certificates and keys ...
	I0116 01:56:48.152666  566425 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 01:56:48.152782  566425 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 01:56:48.251214  566425 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 01:56:48.344151  566425 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 01:56:48.432248  566425 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 01:56:48.548742  566425 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 01:56:49.021530  566425 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 01:56:49.021798  566425 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-874655 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I0116 01:56:49.193843  566425 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 01:56:49.194064  566425 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-874655 localhost] and IPs [192.168.39.252 127.0.0.1 ::1]
	I0116 01:56:49.251833  566425 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 01:56:49.371743  566425 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 01:56:49.847590  566425 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 01:56:49.847934  566425 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 01:56:50.060480  566425 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 01:56:50.185422  566425 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 01:56:50.289208  566425 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 01:56:50.531533  566425 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 01:56:50.534467  566425 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 01:56:50.537012  566425 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 01:56:50.539520  566425 out.go:204]   - Booting up control plane ...
	I0116 01:56:50.539666  566425 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 01:56:50.539843  566425 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 01:56:50.539963  566425 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 01:56:50.558682  566425 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 01:56:50.559205  566425 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 01:56:50.559256  566425 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 01:56:50.670376  566425 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 01:56:58.168117  566425 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502819 seconds
	I0116 01:56:58.168287  566425 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 01:56:58.188728  566425 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 01:56:58.717399  566425 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 01:56:58.717629  566425 kubeadm.go:322] [mark-control-plane] Marking the node addons-874655 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 01:56:59.235812  566425 kubeadm.go:322] [bootstrap-token] Using token: yqji53.y3kda63p4qc7fc5a
	I0116 01:56:59.237593  566425 out.go:204]   - Configuring RBAC rules ...
	I0116 01:56:59.237741  566425 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 01:56:59.250730  566425 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 01:56:59.266336  566425 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 01:56:59.274670  566425 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 01:56:59.286743  566425 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 01:56:59.294397  566425 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 01:56:59.325638  566425 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 01:56:59.574057  566425 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 01:56:59.661548  566425 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 01:56:59.662509  566425 kubeadm.go:322] 
	I0116 01:56:59.662600  566425 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 01:56:59.662611  566425 kubeadm.go:322] 
	I0116 01:56:59.662721  566425 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 01:56:59.662742  566425 kubeadm.go:322] 
	I0116 01:56:59.662785  566425 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 01:56:59.662878  566425 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 01:56:59.662959  566425 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 01:56:59.662977  566425 kubeadm.go:322] 
	I0116 01:56:59.663061  566425 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 01:56:59.663069  566425 kubeadm.go:322] 
	I0116 01:56:59.663169  566425 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 01:56:59.663182  566425 kubeadm.go:322] 
	I0116 01:56:59.663258  566425 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 01:56:59.663390  566425 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 01:56:59.663504  566425 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 01:56:59.663518  566425 kubeadm.go:322] 
	I0116 01:56:59.663617  566425 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 01:56:59.663713  566425 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 01:56:59.663726  566425 kubeadm.go:322] 
	I0116 01:56:59.663849  566425 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yqji53.y3kda63p4qc7fc5a \
	I0116 01:56:59.663996  566425 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6deb26cf1c8e93f46cffe84eac2a224fc772129145a893fb804d73d35459ad94 \
	I0116 01:56:59.664029  566425 kubeadm.go:322] 	--control-plane 
	I0116 01:56:59.664038  566425 kubeadm.go:322] 
	I0116 01:56:59.664166  566425 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 01:56:59.664176  566425 kubeadm.go:322] 
	I0116 01:56:59.664282  566425 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yqji53.y3kda63p4qc7fc5a \
	I0116 01:56:59.664452  566425 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6deb26cf1c8e93f46cffe84eac2a224fc772129145a893fb804d73d35459ad94 
	I0116 01:56:59.664540  566425 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 01:56:59.664597  566425 cni.go:84] Creating CNI manager for ""
	I0116 01:56:59.664621  566425 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:56:59.666509  566425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 01:56:59.668243  566425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 01:56:59.686243  566425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 01:56:59.720425  566425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 01:56:59.720546  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:56:59.720558  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-874655 minikube.k8s.io/updated_at=2024_01_16T01_56_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:00.108772  566425 ops.go:34] apiserver oom_adj: -16
	I0116 01:57:00.108937  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:00.609645  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:01.109445  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:01.609479  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:02.109901  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:02.609062  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:03.109585  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:03.609080  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:04.110018  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:04.609961  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:05.109635  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:05.609042  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:06.109046  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:06.610067  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:07.108983  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:07.609382  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:08.109788  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:08.609649  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:09.108972  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:09.609081  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:10.109964  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:10.609796  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:11.109236  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:11.609980  566425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:57:11.782613  566425 kubeadm.go:1088] duration metric: took 12.06214857s to wait for elevateKubeSystemPrivileges.
	I0116 01:57:11.782656  566425 kubeadm.go:406] StartCluster complete in 24.132207679s
	I0116 01:57:11.782685  566425 settings.go:142] acquiring lock: {Name:mka056fd32f5453b3627898f8ccef9df55e46f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:57:11.782854  566425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 01:57:11.783299  566425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/kubeconfig: {Name:mk589f590988c4dd25f7ecb91c9a410006fe00fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:57:11.783501  566425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 01:57:11.783671  566425 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 01:57:11.783773  566425 config.go:182] Loaded profile config "addons-874655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 01:57:11.783813  566425 addons.go:69] Setting yakd=true in profile "addons-874655"
	I0116 01:57:11.783828  566425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-874655"
	I0116 01:57:11.783843  566425 addons.go:69] Setting default-storageclass=true in profile "addons-874655"
	I0116 01:57:11.783863  566425 addons.go:69] Setting registry=true in profile "addons-874655"
	I0116 01:57:11.783872  566425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-874655"
	I0116 01:57:11.783882  566425 addons.go:69] Setting volumesnapshots=true in profile "addons-874655"
	I0116 01:57:11.783878  566425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-874655"
	I0116 01:57:11.783892  566425 addons.go:234] Setting addon volumesnapshots=true in "addons-874655"
	I0116 01:57:11.783905  566425 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-874655"
	I0116 01:57:11.783903  566425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-874655"
	I0116 01:57:11.783920  566425 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-874655"
	I0116 01:57:11.783927  566425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-874655"
	I0116 01:57:11.783966  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.783967  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.783977  566425 addons.go:69] Setting ingress=true in profile "addons-874655"
	I0116 01:57:11.783988  566425 addons.go:234] Setting addon ingress=true in "addons-874655"
	I0116 01:57:11.784026  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.784282  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.784315  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.784365  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.784366  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.783850  566425 addons.go:69] Setting helm-tiller=true in profile "addons-874655"
	I0116 01:57:11.784381  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.784387  566425 addons.go:234] Setting addon helm-tiller=true in "addons-874655"
	I0116 01:57:11.784399  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.784404  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.784409  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.784419  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.784468  566425 addons.go:69] Setting ingress-dns=true in profile "addons-874655"
	I0116 01:57:11.784480  566425 addons.go:234] Setting addon ingress-dns=true in "addons-874655"
	I0116 01:57:11.784525  566425 addons.go:69] Setting inspektor-gadget=true in profile "addons-874655"
	I0116 01:57:11.784533  566425 addons.go:234] Setting addon inspektor-gadget=true in "addons-874655"
	I0116 01:57:11.784536  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.784543  566425 addons.go:69] Setting metrics-server=true in profile "addons-874655"
	I0116 01:57:11.784552  566425 addons.go:234] Setting addon metrics-server=true in "addons-874655"
	I0116 01:57:11.783841  566425 addons.go:234] Setting addon yakd=true in "addons-874655"
	I0116 01:57:11.784674  566425 addons.go:69] Setting cloud-spanner=true in profile "addons-874655"
	I0116 01:57:11.784370  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.784710  566425 addons.go:69] Setting storage-provisioner=true in profile "addons-874655"
	I0116 01:57:11.784721  566425 addons.go:234] Setting addon storage-provisioner=true in "addons-874655"
	I0116 01:57:11.783967  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.783875  566425 addons.go:234] Setting addon registry=true in "addons-874655"
	I0116 01:57:11.784740  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.783808  566425 addons.go:69] Setting gcp-auth=true in profile "addons-874655"
	I0116 01:57:11.784698  566425 addons.go:234] Setting addon cloud-spanner=true in "addons-874655"
	I0116 01:57:11.784855  566425 mustload.go:65] Loading cluster: addons-874655
	I0116 01:57:11.784910  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.784908  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.784948  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.784968  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.785020  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.785070  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785104  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785175  566425 config.go:182] Loaded profile config "addons-874655": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 01:57:11.785184  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.785247  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785280  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.784978  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785395  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785460  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785394  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.785627  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785680  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785742  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785754  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785765  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785776  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785790  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785797  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785829  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785870  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.785905  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.785943  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.802604  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0116 01:57:11.803464  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.804200  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I0116 01:57:11.804314  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.804337  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.804357  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0116 01:57:11.804478  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0116 01:57:11.804652  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.804763  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.804822  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.804976  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.805328  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.805362  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.805390  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.805440  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.805511  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.805529  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.805654  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.805671  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.805912  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.805936  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.806027  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.816158  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.816193  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0116 01:57:11.816436  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.816486  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.816551  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.816584  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.817630  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.818494  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.818523  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.820356  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.823856  566425 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-874655"
	I0116 01:57:11.823917  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.824360  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.824399  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.828964  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.829011  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.837711  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0116 01:57:11.838635  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.839360  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.839385  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.840922  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.841585  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.841632  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.850107  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
	I0116 01:57:11.850682  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.851496  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.851522  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.852190  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.852469  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.853403  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0116 01:57:11.854279  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.854973  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.854998  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.855059  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.855452  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.855492  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.855704  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0116 01:57:11.856448  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.856993  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.857091  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.857494  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.857645  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.858271  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35515
	I0116 01:57:11.858421  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I0116 01:57:11.858464  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.859060  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.859535  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.859703  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.859781  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.860263  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.860283  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.860727  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.860885  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.860924  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.861023  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.861045  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.863320  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 01:57:11.861588  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.862157  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46865
	I0116 01:57:11.862652  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.866884  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 01:57:11.865861  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.866566  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.869780  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 01:57:11.868528  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.868547  566425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 01:57:11.870199  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 01:57:11.870922  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.871240  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.873155  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 01:57:11.872060  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.872698  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.876744  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 01:57:11.874909  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0116 01:57:11.875032  566425 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 01:57:11.875133  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.875599  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.878352  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.880354  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 01:57:11.878554  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 01:57:11.879230  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.879908  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.880934  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0116 01:57:11.880969  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0116 01:57:11.881460  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I0116 01:57:11.884359  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 01:57:11.882451  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.882652  566425 addons.go:234] Setting addon default-storageclass=true in "addons-874655"
	I0116 01:57:11.883401  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.883420  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.883441  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0116 01:57:11.883456  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.883470  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33849
	I0116 01:57:11.883683  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0116 01:57:11.883978  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.884044  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.885047  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0116 01:57:11.885909  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.887839  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 01:57:11.886108  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.886207  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:11.887547  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.887908  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.889965  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 01:57:11.889993  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 01:57:11.890037  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.888287  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.887727  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.887754  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.890187  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.888307  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.887677  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.890271  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.888606  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.888823  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.890353  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.890663  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.890686  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.889114  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.889456  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.890826  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0116 01:57:11.890920  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.891381  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.891571  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.891588  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.891724  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.892056  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.892568  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.892592  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.892672  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.892696  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.893020  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.893074  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.893093  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.893116  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.893155  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.893188  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.893207  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.893238  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.893257  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.893278  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.893319  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.893801  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.893875  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.893879  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.893947  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.894348  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.894416  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.894793  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.894829  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.894932  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.894985  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.895379  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.895401  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.895599  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.895765  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.896023  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.896077  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.896409  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.898107  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.898180  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.898235  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.900379  566425 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 01:57:11.899232  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.899327  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.899408  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.902320  566425 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 01:57:11.902656  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.903873  566425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 01:57:11.905470  566425 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 01:57:11.905494  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 01:57:11.905551  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.903965  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 01:57:11.905635  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.903980  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.904225  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.907670  566425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 01:57:11.909463  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 01:57:11.909489  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 01:57:11.909529  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.909853  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.910422  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.910458  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.910715  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.910952  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.911321  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.911561  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.912213  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.912539  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.912563  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.912972  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.913234  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.913493  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.913753  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.915625  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.916132  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.916163  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.916393  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.916628  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.916832  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.917026  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.917688  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I0116 01:57:11.918096  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.918620  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
	I0116 01:57:11.918885  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.918901  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.919533  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0116 01:57:11.919755  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.920449  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.920475  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.920564  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.920619  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.920716  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0116 01:57:11.920964  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.921771  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.921792  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.921834  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.922225  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0116 01:57:11.922570  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.922603  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.922620  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.922676  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.922935  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.923004  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.923053  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.923085  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.924940  566425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 01:57:11.923523  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.923688  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.925490  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.925527  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.926676  566425 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 01:57:11.926697  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 01:57:11.926717  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.928883  566425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 01:57:11.927856  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.929232  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.930048  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I0116 01:57:11.930777  566425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 01:57:11.930870  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.930730  566425 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 01:57:11.930382  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.931310  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.931481  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.932311  566425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:57:11.932332  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 01:57:11.932422  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.932828  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.933760  566425 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 01:57:11.933830  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.933858  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.934046  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.934085  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.934408  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.935662  566425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 01:57:11.937490  566425 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 01:57:11.939230  566425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:57:11.935878  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.936487  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.937060  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0116 01:57:11.937509  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 01:57:11.939572  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.941181  566425 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 01:57:11.941206  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 01:57:11.941228  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.941425  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.941458  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.942343  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.942349  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I0116 01:57:11.942344  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.942439  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.942462  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.942495  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.942750  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.942797  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.942898  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.944525  566425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 01:57:11.943406  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.943896  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.944095  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.945932  566425 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 01:57:11.945949  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 01:57:11.945971  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.946207  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.946302  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.946521  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.946653  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.946717  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.947011  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.948597  566425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 01:57:11.950141  566425 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 01:57:11.950158  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 01:57:11.950177  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.947757  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.950225  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.950224  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.947785  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.947823  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0116 01:57:11.950282  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.950314  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.947832  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.950399  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.947999  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.948252  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.948980  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.951006  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.951054  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.951118  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.951117  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.951159  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.951179  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.953030  566425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 01:57:11.951490  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.951507  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.951515  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.951836  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.953156  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.953780  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.954559  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.954581  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.954657  566425 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 01:57:11.954672  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 01:57:11.954707  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.954342  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.954949  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.954947  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.956622  566425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 01:57:11.955345  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.955375  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.955384  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.956033  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.958105  566425 out.go:177]   - Using image docker.io/busybox:stable
	I0116 01:57:11.958120  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.959621  566425 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 01:57:11.958136  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.958352  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.958382  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.958923  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.959674  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 01:57:11.959689  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.959707  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.959713  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.960354  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.960387  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.960363  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.960608  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.960783  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.961979  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:11.962037  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:11.963174  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.963603  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.963626  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.963851  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.964043  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.964202  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.964326  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:11.979490  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0116 01:57:11.980211  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:11.980802  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:11.980832  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:11.981233  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:11.981498  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:11.983444  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:11.983836  566425 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 01:57:11.983856  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 01:57:11.983876  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:11.987616  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.988118  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:11.988140  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:11.988333  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:11.988562  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:11.988690  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:11.988858  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	W0116 01:57:11.996533  566425 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51404->192.168.39.252:22: read: connection reset by peer
	I0116 01:57:11.996578  566425 retry.go:31] will retry after 129.526621ms: ssh: handshake failed: read tcp 192.168.39.1:51404->192.168.39.252:22: read: connection reset by peer
	I0116 01:57:12.289201  566425 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-874655" context rescaled to 1 replicas
	I0116 01:57:12.289266  566425 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0116 01:57:12.291474  566425 out.go:177] * Verifying Kubernetes components...
	I0116 01:57:12.293399  566425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 01:57:12.363707  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 01:57:12.363742  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 01:57:12.448511  566425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 01:57:12.462114  566425 node_ready.go:35] waiting up to 6m0s for node "addons-874655" to be "Ready" ...
	I0116 01:57:12.466945  566425 node_ready.go:49] node "addons-874655" has status "Ready":"True"
	I0116 01:57:12.466973  566425 node_ready.go:38] duration metric: took 4.812851ms waiting for node "addons-874655" to be "Ready" ...
	I0116 01:57:12.466984  566425 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 01:57:12.477659  566425 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-76kgl" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:12.565344  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 01:57:12.599961  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 01:57:12.611009  566425 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 01:57:12.611065  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 01:57:12.621285  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 01:57:12.695731  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 01:57:12.695759  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 01:57:12.734004  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 01:57:12.750306  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 01:57:12.753007  566425 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 01:57:12.753035  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 01:57:12.756779  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 01:57:12.768990  566425 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 01:57:12.769023  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 01:57:12.812891  566425 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 01:57:12.812927  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 01:57:12.830447  566425 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 01:57:12.830478  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 01:57:12.838371  566425 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 01:57:12.838404  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 01:57:12.859863  566425 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 01:57:12.859895  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 01:57:13.080664  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 01:57:13.098254  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 01:57:13.098299  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 01:57:13.167083  566425 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 01:57:13.167113  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 01:57:13.183339  566425 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 01:57:13.183370  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 01:57:13.202044  566425 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 01:57:13.202083  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 01:57:13.253404  566425 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 01:57:13.253443  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 01:57:13.405368  566425 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 01:57:13.405406  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 01:57:13.410432  566425 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 01:57:13.410458  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 01:57:13.456631  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 01:57:13.456677  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 01:57:13.540236  566425 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 01:57:13.540278  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 01:57:13.581962  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 01:57:13.614597  566425 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 01:57:13.614633  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 01:57:13.639301  566425 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 01:57:13.639328  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 01:57:13.676470  566425 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 01:57:13.676498  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 01:57:13.681559  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 01:57:13.690486  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 01:57:13.690517  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 01:57:13.693249  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 01:57:13.780001  566425 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 01:57:13.780038  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 01:57:13.815240  566425 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 01:57:13.815267  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 01:57:13.830368  566425 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 01:57:13.830407  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 01:57:13.885282  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 01:57:13.885319  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 01:57:14.011668  566425 pod_ready.go:97] pod "coredns-5dd5756b68-76kgl" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.252 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-01-16 01:57:11 +0000 UTC InitContainerStatuses:[] ContainerS
tatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003f94b0a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 01:57:14.011715  566425 pod_ready.go:81] duration metric: took 1.534020057s waiting for pod "coredns-5dd5756b68-76kgl" in "kube-system" namespace to be "Ready" ...
	E0116 01:57:14.011734  566425 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-76kgl" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 01:57:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.252 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-01-16 01:57:11 +0000 UTC InitCo
ntainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc003f94b0a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 01:57:14.011746  566425 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jxfvn" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.064357  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 01:57:14.073398  566425 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 01:57:14.073427  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 01:57:14.377355  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 01:57:14.377393  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 01:57:14.417143  566425 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 01:57:14.417174  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 01:57:14.452879  566425 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:57:14.452905  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 01:57:14.479662  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 01:57:14.521426  566425 pod_ready.go:92] pod "coredns-5dd5756b68-jxfvn" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:14.521472  566425 pod_ready.go:81] duration metric: took 509.714636ms waiting for pod "coredns-5dd5756b68-jxfvn" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.521490  566425 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.535146  566425 pod_ready.go:92] pod "etcd-addons-874655" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:14.535172  566425 pod_ready.go:81] duration metric: took 13.67339ms waiting for pod "etcd-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.535185  566425 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.543492  566425 pod_ready.go:92] pod "kube-apiserver-addons-874655" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:14.543521  566425 pod_ready.go:81] duration metric: took 8.327056ms waiting for pod "kube-apiserver-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.543534  566425 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.553332  566425 pod_ready.go:92] pod "kube-controller-manager-addons-874655" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:14.553356  566425 pod_ready.go:81] duration metric: took 9.814556ms waiting for pod "kube-controller-manager-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.553367  566425 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xv7c" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.670883  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:57:14.687829  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 01:57:14.687863  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 01:57:14.865931  566425 pod_ready.go:92] pod "kube-proxy-8xv7c" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:14.865959  566425 pod_ready.go:81] duration metric: took 312.583419ms waiting for pod "kube-proxy-8xv7c" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:14.865972  566425 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:15.213411  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 01:57:15.213441  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 01:57:15.265730  566425 pod_ready.go:92] pod "kube-scheduler-addons-874655" in "kube-system" namespace has status "Ready":"True"
	I0116 01:57:15.265769  566425 pod_ready.go:81] duration metric: took 399.786998ms waiting for pod "kube-scheduler-addons-874655" in "kube-system" namespace to be "Ready" ...
	I0116 01:57:15.265782  566425 pod_ready.go:38] duration metric: took 2.798785865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 01:57:15.265808  566425 api_server.go:52] waiting for apiserver process to appear ...
	I0116 01:57:15.265915  566425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 01:57:15.379837  566425 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 01:57:15.379879  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 01:57:15.568383  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 01:57:16.877170  566425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.428589525s)
	I0116 01:57:16.877215  566425 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 01:57:18.512973  566425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 01:57:18.513022  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:18.517186  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:18.517698  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:18.517736  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:18.517987  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:18.518290  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:18.518484  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:18.518716  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:19.498969  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.898966038s)
	I0116 01:57:19.499031  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:19.499043  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:19.499455  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:19.499525  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:19.499540  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:19.499560  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:19.499581  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:19.499872  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:19.499974  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:19.499950  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:19.501235  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.935833992s)
	I0116 01:57:19.501278  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:19.501289  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:19.501655  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:19.501681  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:19.501692  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:19.501702  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:19.501661  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:19.502026  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:19.502075  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:19.502117  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:19.603749  566425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 01:57:19.888737  566425 addons.go:234] Setting addon gcp-auth=true in "addons-874655"
	I0116 01:57:19.888809  566425 host.go:66] Checking if "addons-874655" exists ...
	I0116 01:57:19.889314  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:19.889355  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:19.906066  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0116 01:57:19.906598  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:19.907213  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:19.907243  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:19.907635  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:19.908195  566425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 01:57:19.908230  566425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 01:57:19.924286  566425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0116 01:57:19.924868  566425 main.go:141] libmachine: () Calling .GetVersion
	I0116 01:57:19.925470  566425 main.go:141] libmachine: Using API Version  1
	I0116 01:57:19.925495  566425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 01:57:19.925843  566425 main.go:141] libmachine: () Calling .GetMachineName
	I0116 01:57:19.926086  566425 main.go:141] libmachine: (addons-874655) Calling .GetState
	I0116 01:57:19.927594  566425 main.go:141] libmachine: (addons-874655) Calling .DriverName
	I0116 01:57:19.927866  566425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 01:57:19.927898  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHHostname
	I0116 01:57:19.931132  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:19.931632  566425 main.go:141] libmachine: (addons-874655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:f8:62", ip: ""} in network mk-addons-874655: {Iface:virbr1 ExpiryTime:2024-01-16 02:56:29 +0000 UTC Type:0 Mac:52:54:00:1d:f8:62 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:addons-874655 Clientid:01:52:54:00:1d:f8:62}
	I0116 01:57:19.931660  566425 main.go:141] libmachine: (addons-874655) DBG | domain addons-874655 has defined IP address 192.168.39.252 and MAC address 52:54:00:1d:f8:62 in network mk-addons-874655
	I0116 01:57:19.931990  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHPort
	I0116 01:57:19.932228  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHKeyPath
	I0116 01:57:19.932420  566425 main.go:141] libmachine: (addons-874655) Calling .GetSSHUsername
	I0116 01:57:19.932705  566425 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/addons-874655/id_rsa Username:docker}
	I0116 01:57:23.857975  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.123902638s)
	I0116 01:57:23.858023  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.107671293s)
	I0116 01:57:23.858045  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858064  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858073  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858075  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.101268124s)
	I0116 01:57:23.858089  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858101  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858111  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858185  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.777484957s)
	I0116 01:57:23.858218  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858229  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858262  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.276251162s)
	I0116 01:57:23.858284  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858295  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858294  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.176704535s)
	I0116 01:57:23.858322  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858335  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858373  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.165092775s)
	I0116 01:57:23.858392  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858402  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858423  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.794029728s)
	I0116 01:57:23.858442  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858453  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858503  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.378809607s)
	I0116 01:57:23.858519  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858528  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858646  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.187730299s)
	I0116 01:57:23.858658  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	W0116 01:57:23.858681  566425 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 01:57:23.858710  566425 retry.go:31] will retry after 352.984506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 01:57:23.858712  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.858724  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.858736  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858742  566425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.592808572s)
	I0116 01:57:23.858760  566425 api_server.go:72] duration metric: took 11.569458671s to wait for apiserver process to appear ...
	I0116 01:57:23.858776  566425 api_server.go:88] waiting for apiserver healthz status ...
	I0116 01:57:23.858801  566425 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0116 01:57:23.858825  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.858852  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.858861  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.858875  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.858877  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.858893  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.858893  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.858898  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.858909  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858918  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858922  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.858928  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.858930  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.858936  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.858940  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858945  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858948  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858953  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858884  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.858973  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.858982  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.859016  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.859024  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.859033  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.859040  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.858744  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.859157  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.859168  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.859439  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.859463  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.859488  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.859497  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.859509  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.859519  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.859582  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.859591  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.859600  566425 addons.go:470] Verifying addon registry=true in "addons-874655"
	I0116 01:57:23.861296  566425 out.go:177] * Verifying registry addon...
	I0116 01:57:23.863059  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.241731495s)
	I0116 01:57:23.863094  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.863107  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.863727  566425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 01:57:23.860994  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861028  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.863966  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.861380  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.864037  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.864052  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.864062  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.861047  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861082  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861101  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.864130  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.864141  566425 addons.go:470] Verifying addon metrics-server=true in "addons-874655"
	I0116 01:57:23.861067  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.864282  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.865636  566425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-874655 service yakd-dashboard -n yakd-dashboard
	
	I0116 01:57:23.864555  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861257  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.861682  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861704  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.861732  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.864587  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.864620  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.864638  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.861198  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.861225  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.867655  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.867667  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.867667  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.867681  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.867693  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.867696  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.867704  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.867706  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.867655  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.868470  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.868482  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.868541  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.868504  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.868552  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.868566  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.868643  566425 addons.go:470] Verifying addon ingress=true in "addons-874655"
	I0116 01:57:23.870379  566425 out.go:177] * Verifying ingress addon...
	I0116 01:57:23.872695  566425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 01:57:23.879662  566425 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0116 01:57:23.882004  566425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 01:57:23.882028  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:23.882445  566425 api_server.go:141] control plane version: v1.28.4
	I0116 01:57:23.882472  566425 api_server.go:131] duration metric: took 23.68399ms to wait for apiserver health ...
	I0116 01:57:23.882483  566425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 01:57:23.885394  566425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 01:57:23.885417  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:23.905020  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.905047  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.905424  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.905429  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:23.905442  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:23.907149  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:23.907177  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:23.907448  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:23.907473  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0116 01:57:23.907597  566425 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0116 01:57:23.913946  566425 system_pods.go:59] 15 kube-system pods found
	I0116 01:57:23.913976  566425 system_pods.go:61] "coredns-5dd5756b68-jxfvn" [020d87ec-4c0f-47ff-9799-d0091fd453d8] Running
	I0116 01:57:23.913980  566425 system_pods.go:61] "etcd-addons-874655" [d39f45b4-8347-4ce3-87dc-35f966fb833d] Running
	I0116 01:57:23.913985  566425 system_pods.go:61] "kube-apiserver-addons-874655" [bcdd1fb8-66eb-40d2-8e57-898a81aa49ac] Running
	I0116 01:57:23.913989  566425 system_pods.go:61] "kube-controller-manager-addons-874655" [3fc5353b-24b0-48c6-83df-5e09023f746c] Running
	I0116 01:57:23.913998  566425 system_pods.go:61] "kube-ingress-dns-minikube" [414b75a3-e9f0-4bc2-bc78-ccdb57d66d46] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 01:57:23.914004  566425 system_pods.go:61] "kube-proxy-8xv7c" [9ae83988-b441-4370-a27c-ff63c720e06f] Running
	I0116 01:57:23.914017  566425 system_pods.go:61] "kube-scheduler-addons-874655" [1312294e-ad42-4f06-8a58-733dfa186b81] Running
	I0116 01:57:23.914033  566425 system_pods.go:61] "metrics-server-7c66d45ddc-m4wqp" [be51a3a4-8a00-421c-88df-222a2ebded47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 01:57:23.914044  566425 system_pods.go:61] "nvidia-device-plugin-daemonset-7xfml" [a88dbfda-64b4-4b19-b555-d3c1125242f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0116 01:57:23.914058  566425 system_pods.go:61] "registry-78bhv" [1907fb2e-d297-4c24-82d4-d7d8736b29cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 01:57:23.914074  566425 system_pods.go:61] "registry-proxy-x6nzc" [64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 01:57:23.914083  566425 system_pods.go:61] "snapshot-controller-58dbcc7b99-5nprc" [12f6ab6e-50ff-4b3c-bfab-d6fb94e285ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 01:57:23.914095  566425 system_pods.go:61] "snapshot-controller-58dbcc7b99-p2tvv" [fd372217-c45c-4ddc-9410-bd643f29a822] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 01:57:23.914107  566425 system_pods.go:61] "storage-provisioner" [100fa3b9-4f94-4687-a675-a5e3c9f03dbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 01:57:23.914120  566425 system_pods.go:61] "tiller-deploy-7b677967b9-9f2w9" [068035ed-e81b-4a9b-921a-c3a8b21cdf49] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 01:57:23.914132  566425 system_pods.go:74] duration metric: took 31.642395ms to wait for pod list to return data ...
	I0116 01:57:23.914150  566425 default_sa.go:34] waiting for default service account to be created ...
	I0116 01:57:23.922518  566425 default_sa.go:45] found service account: "default"
	I0116 01:57:23.922560  566425 default_sa.go:55] duration metric: took 8.398972ms for default service account to be created ...
	I0116 01:57:23.922580  566425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 01:57:23.931723  566425 system_pods.go:86] 15 kube-system pods found
	I0116 01:57:23.931757  566425 system_pods.go:89] "coredns-5dd5756b68-jxfvn" [020d87ec-4c0f-47ff-9799-d0091fd453d8] Running
	I0116 01:57:23.931763  566425 system_pods.go:89] "etcd-addons-874655" [d39f45b4-8347-4ce3-87dc-35f966fb833d] Running
	I0116 01:57:23.931768  566425 system_pods.go:89] "kube-apiserver-addons-874655" [bcdd1fb8-66eb-40d2-8e57-898a81aa49ac] Running
	I0116 01:57:23.931772  566425 system_pods.go:89] "kube-controller-manager-addons-874655" [3fc5353b-24b0-48c6-83df-5e09023f746c] Running
	I0116 01:57:23.931780  566425 system_pods.go:89] "kube-ingress-dns-minikube" [414b75a3-e9f0-4bc2-bc78-ccdb57d66d46] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 01:57:23.931785  566425 system_pods.go:89] "kube-proxy-8xv7c" [9ae83988-b441-4370-a27c-ff63c720e06f] Running
	I0116 01:57:23.931811  566425 system_pods.go:89] "kube-scheduler-addons-874655" [1312294e-ad42-4f06-8a58-733dfa186b81] Running
	I0116 01:57:23.931822  566425 system_pods.go:89] "metrics-server-7c66d45ddc-m4wqp" [be51a3a4-8a00-421c-88df-222a2ebded47] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 01:57:23.931832  566425 system_pods.go:89] "nvidia-device-plugin-daemonset-7xfml" [a88dbfda-64b4-4b19-b555-d3c1125242f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0116 01:57:23.931842  566425 system_pods.go:89] "registry-78bhv" [1907fb2e-d297-4c24-82d4-d7d8736b29cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 01:57:23.931863  566425 system_pods.go:89] "registry-proxy-x6nzc" [64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 01:57:23.931870  566425 system_pods.go:89] "snapshot-controller-58dbcc7b99-5nprc" [12f6ab6e-50ff-4b3c-bfab-d6fb94e285ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 01:57:23.931877  566425 system_pods.go:89] "snapshot-controller-58dbcc7b99-p2tvv" [fd372217-c45c-4ddc-9410-bd643f29a822] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 01:57:23.931883  566425 system_pods.go:89] "storage-provisioner" [100fa3b9-4f94-4687-a675-a5e3c9f03dbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 01:57:23.931890  566425 system_pods.go:89] "tiller-deploy-7b677967b9-9f2w9" [068035ed-e81b-4a9b-921a-c3a8b21cdf49] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0116 01:57:23.931912  566425 system_pods.go:126] duration metric: took 9.324307ms to wait for k8s-apps to be running ...
	I0116 01:57:23.931957  566425 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 01:57:23.932019  566425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 01:57:24.212279  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:57:24.368849  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:24.377412  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:24.940335  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:24.940444  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:25.368933  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:25.379592  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:25.868754  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:25.878621  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:26.306567  566425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.378669289s)
	I0116 01:57:26.306583  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.738141603s)
	I0116 01:57:26.306648  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:26.306676  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:26.306688  566425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.374642152s)
	I0116 01:57:26.306730  566425 system_svc.go:56] duration metric: took 2.374770154s WaitForService to wait for kubelet.
	I0116 01:57:26.311697  566425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:57:26.306744  566425 kubeadm.go:581] duration metric: took 14.017442612s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 01:57:26.307089  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:26.307145  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:26.311753  566425 node_conditions.go:102] verifying NodePressure condition ...
	I0116 01:57:26.311767  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:26.313683  566425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 01:57:26.315531  566425 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 01:57:26.313691  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:26.315576  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:26.315609  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 01:57:26.315954  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:26.315973  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:26.315990  566425 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-874655"
	I0116 01:57:26.317744  566425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 01:57:26.320439  566425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 01:57:26.331979  566425 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 01:57:26.332022  566425 node_conditions.go:123] node cpu capacity is 2
	I0116 01:57:26.332036  566425 node_conditions.go:105] duration metric: took 20.259372ms to run NodePressure ...
	I0116 01:57:26.332052  566425 start.go:228] waiting for startup goroutines ...
	I0116 01:57:26.394891  566425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 01:57:26.394931  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:26.407931  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:26.410210  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:26.484304  566425 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 01:57:26.484341  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 01:57:26.606920  566425 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 01:57:26.606952  566425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 01:57:26.635318  566425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 01:57:26.826777  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:26.875543  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:26.881271  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:27.327067  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:27.370793  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:27.377574  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:27.594231  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.381880601s)
	I0116 01:57:27.594323  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:27.594349  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:27.594774  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:27.594797  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:27.594809  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:27.594818  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:27.595131  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:27.595143  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:27.595218  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:27.826575  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:27.868240  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:27.877480  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:28.335458  566425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.700081081s)
	I0116 01:57:28.335519  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:28.335535  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:28.335893  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:28.335921  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:28.335933  566425 main.go:141] libmachine: Making call to close driver server
	I0116 01:57:28.335942  566425 main.go:141] libmachine: (addons-874655) Calling .Close
	I0116 01:57:28.336183  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:28.336282  566425 main.go:141] libmachine: (addons-874655) DBG | Closing plugin on server side
	I0116 01:57:28.336310  566425 main.go:141] libmachine: Successfully made call to close driver server
	I0116 01:57:28.336344  566425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 01:57:28.337762  566425 addons.go:470] Verifying addon gcp-auth=true in "addons-874655"
	I0116 01:57:28.339739  566425 out.go:177] * Verifying gcp-auth addon...
	I0116 01:57:28.342439  566425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 01:57:28.353044  566425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 01:57:28.353076  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:28.372016  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:28.378848  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:28.826902  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:28.846156  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:28.869018  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:28.878007  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:29.327657  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:29.346741  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:29.369424  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:29.377463  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:29.826548  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:29.848185  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:29.868667  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:29.877523  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:30.326317  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:30.345893  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:30.369651  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:30.377244  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:30.826990  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:30.846938  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:30.869394  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:30.880364  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:31.550020  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:31.550206  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:31.552096  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:31.553521  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:31.827199  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:31.847647  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:31.868893  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:31.878316  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:32.326332  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:32.346097  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:32.369438  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:32.376922  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:32.826653  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:32.846890  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:32.876335  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:32.880321  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:33.327098  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:33.346915  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:33.369814  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:33.377776  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:33.826435  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:33.846871  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:33.869332  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:33.877700  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:34.327300  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:34.346787  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:34.369621  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:34.377388  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:34.827972  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:34.846795  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:34.870300  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:34.878315  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:35.327546  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:35.346741  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:35.372009  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:35.378773  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:35.827349  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:35.846030  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:35.872217  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:35.876514  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:36.327763  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:36.346807  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:36.370020  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:36.377859  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:36.827043  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:36.871577  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:36.872864  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:36.876718  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:37.327183  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:37.347085  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:37.370571  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:37.377078  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:37.831459  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:37.849823  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:37.877503  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:37.885031  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:38.326817  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:38.348166  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:38.368685  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:38.379083  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:38.826819  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:38.847408  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:38.869304  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:38.876983  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:39.327001  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:39.347171  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:39.369369  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:39.377341  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:39.827045  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:39.847562  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:39.870281  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:39.878578  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:40.507201  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:40.508136  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:40.508185  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:40.509623  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:40.827278  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:40.847153  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:40.868886  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:40.877618  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:41.327075  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:41.346449  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:41.373363  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:41.379870  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:41.828552  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:41.847458  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:41.873473  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:41.877248  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:42.326312  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:42.347193  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:42.368906  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:42.377784  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:42.826927  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:42.846897  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:42.869743  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:42.878029  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:43.328749  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:43.350034  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:43.371231  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:43.377394  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:43.827359  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:43.846802  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:43.871598  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:43.880457  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:44.332056  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:44.347280  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:44.370805  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:44.379951  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:44.826935  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:44.847400  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:44.869336  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:44.877949  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:45.326295  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:45.346684  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:45.369763  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:45.377568  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:45.828194  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:45.847772  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:45.870912  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:45.877775  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:46.327865  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:46.347252  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:46.369165  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:46.378448  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:46.828078  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:46.846902  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:46.870500  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:46.877112  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:47.327171  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:47.349576  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:47.369204  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:47.379192  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:47.826694  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:47.846016  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:47.872807  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:47.877359  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:48.326897  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:48.347130  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:48.368575  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:48.377475  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:48.826742  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:48.847107  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:48.868824  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:48.878232  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:49.327262  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:49.346979  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:49.369550  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:49.377566  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:49.827879  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:49.847441  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:49.872597  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:49.876930  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:50.326143  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:50.346786  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:50.369316  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:50.376914  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:50.826498  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:50.847523  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:50.870058  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:50.878052  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:51.326683  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:51.346734  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:51.373209  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:51.379572  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:51.826741  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:51.846687  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:51.876765  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:51.878888  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:52.326835  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:52.347042  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:52.368840  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:52.377886  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:52.827019  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:52.847874  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:52.870754  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:52.877984  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:53.333753  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:53.346581  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:53.369189  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:53.377867  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:53.827107  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:53.846814  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:53.869532  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:53.878916  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:54.326879  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:54.346562  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:54.369943  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:54.378661  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:54.826865  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:54.846531  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:54.869132  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:54.878344  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:55.326772  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:55.348280  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:55.373492  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:55.378996  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:55.826038  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:55.847199  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:55.872693  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:55.878943  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:56.327411  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:56.347273  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:56.369416  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:56.377275  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:56.827129  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:56.847093  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:56.869144  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:56.878479  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:57.327285  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:57.347458  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:57.370282  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:57.378773  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:57.827392  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:57.846931  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:57.873880  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:57.878037  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:58.327684  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:58.347992  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:58.369660  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:58.377700  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:58.828009  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:58.846738  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:58.869217  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:58.877351  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:59.326961  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:59.347096  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:59.368731  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:59.377915  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:57:59.827074  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:57:59.848027  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:57:59.873700  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:57:59.878572  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:00.326990  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:00.346858  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:00.370397  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:00.378163  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:00.837627  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:00.847340  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:00.869379  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:00.877183  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:01.554061  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:01.554678  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:01.554918  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:01.556563  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:01.828761  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:01.846924  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:01.870165  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:01.877991  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:02.327235  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:02.347576  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:02.369644  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:02.377520  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:02.960586  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:02.961240  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:02.964222  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:02.964854  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:03.326956  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:03.347140  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:03.369163  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:03.377821  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:03.826881  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:03.847161  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:03.870179  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:03.878652  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:04.328744  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:04.346469  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:04.370028  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:04.380414  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:04.826602  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:04.846590  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:04.869198  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:04.877943  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:05.326816  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:05.346866  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:05.369668  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:05.377728  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:05.826544  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:05.847155  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:05.869788  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:05.877572  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:06.327355  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:06.346058  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:06.370969  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:06.377462  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:06.828428  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:06.846929  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:06.869314  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:06.877378  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:07.326429  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:07.347087  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:07.368767  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:07.378033  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:07.828186  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:07.846721  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:07.869892  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:58:07.877496  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:08.325841  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:08.347151  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:08.372284  566425 kapi.go:107] duration metric: took 44.508551981s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 01:58:08.379266  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:08.827372  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:08.846217  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:08.877544  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:09.327259  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:09.346165  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:09.377715  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:09.826139  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:09.847311  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:09.877780  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:10.327348  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:10.346268  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:10.378423  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:10.826940  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:10.846900  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:10.878305  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:11.326737  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:11.346807  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:11.377980  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:11.827998  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:11.847730  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:11.879401  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:12.326655  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:12.346399  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:12.377811  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:12.827946  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:12.846502  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:12.877917  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:13.326117  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:13.347023  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:13.379190  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:13.826421  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:13.846441  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:13.878290  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:14.334619  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:14.348261  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:14.382897  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:14.828048  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:14.848092  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:14.878925  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:15.331661  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:15.347240  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:15.377763  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:16.224361  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:16.224933  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:16.225433  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:16.332355  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:16.347411  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:16.379743  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:16.827361  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:16.847111  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:16.878138  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:17.325973  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:17.346667  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:17.380093  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:17.826764  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:17.846821  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:17.878973  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:18.326864  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:18.347840  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:18.378251  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:18.826493  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:18.846847  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:18.878094  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:19.327730  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:19.347263  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:19.377750  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:19.832359  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:19.846300  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:19.878434  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:20.335871  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:20.355667  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:20.378888  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:20.826564  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:20.846853  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:20.878636  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:21.327048  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:21.357453  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:21.377728  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:21.830254  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:21.847851  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:21.878891  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:22.327169  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:22.346547  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:22.377973  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:22.827298  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:22.847018  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:22.878411  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:23.327202  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:23.347316  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:23.379185  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:23.830999  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:23.846995  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:23.879103  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:24.327554  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:24.347401  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:24.377508  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:24.827258  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:24.847968  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:24.878913  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:25.327389  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:25.349791  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:25.378850  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:25.826002  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:25.847096  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:25.878176  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:26.328704  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:26.347122  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:26.377318  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:26.827153  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:26.847530  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:26.878210  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:27.327778  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:27.346596  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:27.378251  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:27.826654  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:27.846763  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:27.880330  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:28.326422  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:28.347094  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:28.381626  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:28.827580  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:28.846857  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:28.878535  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:29.327695  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:29.346444  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:29.378718  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:29.827687  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:29.847132  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:29.878172  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:30.340770  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:58:30.346992  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:30.381311  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:30.882783  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:30.883280  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:30.883622  566425 kapi.go:107] duration metric: took 1m4.563183235s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 01:58:31.347231  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:31.377530  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:31.848164  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:31.878405  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:32.346678  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:32.378007  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:32.846456  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:32.885488  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:33.347073  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:33.377994  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:33.847220  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:33.877860  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:34.346976  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:34.378443  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:34.847152  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:34.877920  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:35.347217  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:35.377455  566425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:58:35.846850  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:35.877950  566425 kapi.go:107] duration metric: took 1m12.005254294s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 01:58:36.346448  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:36.846825  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:37.347345  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:37.847705  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:38.346284  566425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:58:38.846594  566425 kapi.go:107] duration metric: took 1m10.504149427s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 01:58:38.848651  566425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-874655 cluster.
	I0116 01:58:38.850649  566425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 01:58:38.852308  566425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 01:58:38.854268  566425 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, inspektor-gadget, helm-tiller, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 01:58:38.855713  566425 addons.go:505] enable addons completed in 1m27.072045133s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd inspektor-gadget helm-tiller default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 01:58:38.855764  566425 start.go:233] waiting for cluster config update ...
	I0116 01:58:38.855789  566425 start.go:242] writing updated cluster config ...
	I0116 01:58:38.856103  566425 ssh_runner.go:195] Run: rm -f paused
	I0116 01:58:38.909543  566425 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 01:58:38.911484  566425 out.go:177] * Done! kubectl is now configured to use "addons-874655" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e8bf9ea1434a2       6d2a98b274382       19 seconds ago       Running             gcp-auth                                 0                   ac46bcf874b60       gcp-auth-d4c87556c-k54ds
	624b8ecff4e59       311f90a3747fd       22 seconds ago       Running             controller                               0                   ea2dbc82b635c       ingress-nginx-controller-69cff4fd79-646q9
	16947755a8498       738351fd438f0       28 seconds ago       Running             csi-snapshotter                          0                   243d50af10e16       csi-hostpathplugin-57ghf
	663d110f79a98       931dbfd16f87c       30 seconds ago       Running             csi-provisioner                          0                   243d50af10e16       csi-hostpathplugin-57ghf
	df9bc4ef29d64       e899260153aed       32 seconds ago       Running             liveness-probe                           0                   243d50af10e16       csi-hostpathplugin-57ghf
	99c04aa208a9b       e255e073c508c       33 seconds ago       Running             hostpath                                 0                   243d50af10e16       csi-hostpathplugin-57ghf
	a5e850fb620ec       88ef14a257f42       34 seconds ago       Running             node-driver-registrar                    0                   243d50af10e16       csi-hostpathplugin-57ghf
	f578c3ea0ac88       1ebff0f9671bc       36 seconds ago       Exited              patch                                    0                   c0a5a02d7dae0       gcp-auth-certs-patch-n6q2g
	9fbff063765ae       1ebff0f9671bc       37 seconds ago       Exited              create                                   0                   9b24bbca533ac       gcp-auth-certs-create-dj9qn
	e0b01d710aeef       a1ed5895ba635       37 seconds ago       Running             csi-external-health-monitor-controller   0                   243d50af10e16       csi-hostpathplugin-57ghf
	7e2bc7e862303       59cbb42146a37       38 seconds ago       Running             csi-attacher                             0                   bd25256591ec1       csi-hostpath-attacher-0
	800b4b42fb15c       19a639eda60f0       40 seconds ago       Running             csi-resizer                              0                   0f9dbeac41f81       csi-hostpath-resizer-0
	2ed5abe86c2bf       1ebff0f9671bc       42 seconds ago       Exited              patch                                    0                   65f7281bd2541       ingress-nginx-admission-patch-2sqsd
	a29e387026118       1ebff0f9671bc       42 seconds ago       Exited              create                                   0                   06fed57ddb7e7       ingress-nginx-admission-create-kkrqg
	6f22dd933a378       31de47c733c91       44 seconds ago       Running             yakd                                     0                   82047b5e74565       yakd-dashboard-9947fc6bf-mjs9x
	d66d56ecb50b4       aa61ee9c70bc4       49 seconds ago       Running             volume-snapshot-controller               0                   4f9dfd345dd81       snapshot-controller-58dbcc7b99-p2tvv
	ed9f7c3a5daec       aa61ee9c70bc4       49 seconds ago       Running             volume-snapshot-controller               0                   c886c1c2353ac       snapshot-controller-58dbcc7b99-5nprc
	704f43104f0e0       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   900d0fbac6047       local-path-provisioner-78b46b4d5c-czqbm
	4930fa202417a       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   ff00342132c19       kube-ingress-dns-minikube
	01577e925846a       3f39089e90831       About a minute ago   Running             tiller                                   0                   22769d3b61423       tiller-deploy-7b677967b9-9f2w9
	a0c85c50864b8       754854eab8c1c       About a minute ago   Running             cloud-spanner-emulator                   0                   68694b24b53a8       cloud-spanner-emulator-64c8c85f65-pgclx
	0185312efa995       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   ac2b6b4b0892d       storage-provisioner
	d495d116f12b3       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   4437f950c4b17       coredns-5dd5756b68-jxfvn
	b6c383016f6fe       83f6cc407eed8       About a minute ago   Running             kube-proxy                               0                   678ec991a98f4       kube-proxy-8xv7c
	e77edeeac5c6c       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   0e74b51480678       etcd-addons-874655
	d3b48cbdf9598       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler                           0                   292d9e377827e       kube-scheduler-addons-874655
	a401d2fb30cde       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver                           0                   0b25eaf3dceb4       kube-apiserver-addons-874655
	138e7cf6410ad       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager                  0                   76b49b9b1666a       kube-controller-manager-addons-874655
	
	
	==> containerd <==
	-- Journal begins at Tue 2024-01-16 01:56:25 UTC, ends at Tue 2024-01-16 01:58:58 UTC. --
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.686935896Z" level=info msg="StopPodSandbox for \"5b3a0427d6f1639529240f727fdb07e2b8a97c05d0a975c12b94f8d40a6194ec\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.687033335Z" level=info msg="Container to stop \"7718db09554acaabeb48b7f55e0e651bc9d79eaabadb19b3f48b53ce1f8b3037\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.744687952Z" level=info msg="shim disconnected" id=5b3a0427d6f1639529240f727fdb07e2b8a97c05d0a975c12b94f8d40a6194ec namespace=k8s.io
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.744756682Z" level=warning msg="cleaning up after shim disconnected" id=5b3a0427d6f1639529240f727fdb07e2b8a97c05d0a975c12b94f8d40a6194ec namespace=k8s.io
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.744768459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.807833839Z" level=info msg="TearDown network for sandbox \"5b3a0427d6f1639529240f727fdb07e2b8a97c05d0a975c12b94f8d40a6194ec\" successfully"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.807895370Z" level=info msg="StopPodSandbox for \"5b3a0427d6f1639529240f727fdb07e2b8a97c05d0a975c12b94f8d40a6194ec\" returns successfully"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.852969416Z" level=info msg="RemoveContainer for \"4ab97d9187a30bb9be162a996ef2ba6f10ac58dd67aa83fa9cd6d97c8c62c7c0\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.877977300Z" level=info msg="RemoveContainer for \"4ab97d9187a30bb9be162a996ef2ba6f10ac58dd67aa83fa9cd6d97c8c62c7c0\" returns successfully"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.884225582Z" level=error msg="ContainerStatus for \"4ab97d9187a30bb9be162a996ef2ba6f10ac58dd67aa83fa9cd6d97c8c62c7c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ab97d9187a30bb9be162a996ef2ba6f10ac58dd67aa83fa9cd6d97c8c62c7c0\": not found"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.888030844Z" level=info msg="StopPodSandbox for \"7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.888159155Z" level=info msg="Container to stop \"c3bea67148c4bf08c382a847bed299c160374f0ac76a903ad2af44589a0683db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.890770645Z" level=info msg="RemoveContainer for \"7718db09554acaabeb48b7f55e0e651bc9d79eaabadb19b3f48b53ce1f8b3037\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.955199010Z" level=info msg="RemoveContainer for \"7718db09554acaabeb48b7f55e0e651bc9d79eaabadb19b3f48b53ce1f8b3037\" returns successfully"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.967591677Z" level=error msg="ContainerStatus for \"7718db09554acaabeb48b7f55e0e651bc9d79eaabadb19b3f48b53ce1f8b3037\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7718db09554acaabeb48b7f55e0e651bc9d79eaabadb19b3f48b53ce1f8b3037\": not found"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.976690196Z" level=info msg="RemoveContainer for \"8cdab95870928b77f3d3419ba215529a377dfae706207c702a56422a96e57fe8\""
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.986040124Z" level=info msg="RemoveContainer for \"8cdab95870928b77f3d3419ba215529a377dfae706207c702a56422a96e57fe8\" returns successfully"
	Jan 16 01:58:56 addons-874655 containerd[688]: time="2024-01-16T01:58:56.988069988Z" level=error msg="ContainerStatus for \"8cdab95870928b77f3d3419ba215529a377dfae706207c702a56422a96e57fe8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cdab95870928b77f3d3419ba215529a377dfae706207c702a56422a96e57fe8\": not found"
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.025249351Z" level=info msg="shim disconnected" id=7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3 namespace=k8s.io
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.025987302Z" level=warning msg="cleaning up after shim disconnected" id=7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3 namespace=k8s.io
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.026107208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.100235018Z" level=info msg="TearDown network for sandbox \"7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3\" successfully"
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.100453807Z" level=info msg="StopPodSandbox for \"7010e3c493ae1d4af36adeaef2828f0f9da1b15cdf9f78f705d7c137e0446ba3\" returns successfully"
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.894072640Z" level=info msg="RemoveContainer for \"c3bea67148c4bf08c382a847bed299c160374f0ac76a903ad2af44589a0683db\""
	Jan 16 01:58:57 addons-874655 containerd[688]: time="2024-01-16T01:58:57.901788793Z" level=info msg="RemoveContainer for \"c3bea67148c4bf08c382a847bed299c160374f0ac76a903ad2af44589a0683db\" returns successfully"
	
	
	==> coredns [d495d116f12b362205c7493e1405782f1b5d45d0a8b74b6243901138efc14ac0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38387 - 26635 "HINFO IN 1158326830966261383.6674419566022158997. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023903107s
	[INFO] 10.244.0.21:43758 - 20002 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000336545s
	[INFO] 10.244.0.21:34087 - 35895 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000278279s
	[INFO] 10.244.0.21:37853 - 60580 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000217331s
	[INFO] 10.244.0.21:44808 - 18248 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105748s
	[INFO] 10.244.0.21:33100 - 36944 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000169649s
	[INFO] 10.244.0.21:57995 - 11070 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000273225s
	[INFO] 10.244.0.21:46465 - 53244 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001159501s
	[INFO] 10.244.0.21:48068 - 46442 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002421783s
	[INFO] 10.244.0.22:33599 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290013s
	[INFO] 10.244.0.22:35753 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133867s
	
	
	==> describe nodes <==
	Name:               addons-874655
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-874655
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-874655
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T01_56_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-874655
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-874655"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 01:56:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-874655
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 01:58:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 01:58:31 +0000   Tue, 16 Jan 2024 01:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 01:58:31 +0000   Tue, 16 Jan 2024 01:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 01:58:31 +0000   Tue, 16 Jan 2024 01:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 01:58:31 +0000   Tue, 16 Jan 2024 01:57:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    addons-874655
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 42c610653047425ab8ceea5e5ec2da35
	  System UUID:                42c61065-3047-425a-b8ce-ea5e5ec2da35
	  Boot ID:                    a923e7f2-ea32-4012-a8d5-6d85625d0287
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-pgclx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-d4c87556c-k54ds                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-646q9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-jxfvn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     107s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpathplugin-57ghf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-874655                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m
	  kube-system                 kube-apiserver-addons-874655                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-addons-874655        200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-8xv7c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-addons-874655                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 snapshot-controller-58dbcc7b99-5nprc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 snapshot-controller-58dbcc7b99-p2tvv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 tiller-deploy-7b677967b9-9f2w9               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  local-path-storage          local-path-provisioner-78b46b4d5c-czqbm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-mjs9x               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node addons-874655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node addons-874655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node addons-874655 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node addons-874655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node addons-874655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node addons-874655 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                118s                 kubelet          Node addons-874655 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node addons-874655 event: Registered Node addons-874655 in Controller
	
	
	==> dmesg <==
	[  +0.095605] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.457675] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.969427] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139323] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.055561] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.039138] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.104628] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.135503] systemd-fstab-generator[581]: Ignoring "noauto" for root device
	[  +0.098874] systemd-fstab-generator[592]: Ignoring "noauto" for root device
	[  +0.245513] systemd-fstab-generator[619]: Ignoring "noauto" for root device
	[  +6.157361] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +4.281763] systemd-fstab-generator[843]: Ignoring "noauto" for root device
	[  +8.779125] systemd-fstab-generator[1208]: Ignoring "noauto" for root device
	[Jan16 01:57] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.535040] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.110379] kauditd_printk_skb: 30 callbacks suppressed
	[ +10.747331] kauditd_printk_skb: 4 callbacks suppressed
	[Jan16 01:58] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.838906] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.521844] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.372005] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.896951] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [e77edeeac5c6cbe28cfdcf68a4c17e7add5effa37e0d36432844374b617f58cb] <==
	{"level":"info","ts":"2024-01-16T01:58:14.123642Z","caller":"traceutil/trace.go:171","msg":"trace[71218723] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"113.083388ms","start":"2024-01-16T01:58:14.009855Z","end":"2024-01-16T01:58:14.122939Z","steps":["trace[71218723] 'process raft request'  (duration: 112.828315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:16.215156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.374379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:1823"}
	{"level":"info","ts":"2024-01-16T01:58:16.215231Z","caller":"traceutil/trace.go:171","msg":"trace[164172009] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:1020; }","duration":"311.467435ms","start":"2024-01-16T01:58:15.90375Z","end":"2024-01-16T01:58:16.215217Z","steps":["trace[164172009] 'range keys from in-memory index tree'  (duration: 311.296553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:16.215237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.955725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81848"}
	{"level":"warn","ts":"2024-01-16T01:58:16.215261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:58:15.903737Z","time spent":"311.516907ms","remote":"127.0.0.1:39682","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1845,"request content":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" "}
	{"level":"info","ts":"2024-01-16T01:58:16.215373Z","caller":"traceutil/trace.go:171","msg":"trace[2083700781] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1020; }","duration":"396.076261ms","start":"2024-01-16T01:58:15.819236Z","end":"2024-01-16T01:58:16.215312Z","steps":["trace[2083700781] 'range keys from in-memory index tree'  (duration: 395.740778ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:16.215403Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:58:15.819221Z","time spent":"396.174737ms","remote":"127.0.0.1:39704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81870,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-01-16T01:58:16.215588Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.668118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-16T01:58:16.215609Z","caller":"traceutil/trace.go:171","msg":"trace[661959869] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1020; }","duration":"373.692285ms","start":"2024-01-16T01:58:15.841911Z","end":"2024-01-16T01:58:16.215603Z","steps":["trace[661959869] 'range keys from in-memory index tree'  (duration: 373.57896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:16.215625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:58:15.841891Z","time spent":"373.729079ms","remote":"127.0.0.1:39704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10597,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-01-16T01:58:16.215814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.772769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-16T01:58:16.215832Z","caller":"traceutil/trace.go:171","msg":"trace[845253016] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1020; }","duration":"343.792088ms","start":"2024-01-16T01:58:15.872035Z","end":"2024-01-16T01:58:16.215827Z","steps":["trace[845253016] 'range keys from in-memory index tree'  (duration: 343.647238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:16.215849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:58:15.87202Z","time spent":"343.824631ms","remote":"127.0.0.1:39704","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13510,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-01-16T01:58:21.742469Z","caller":"traceutil/trace.go:171","msg":"trace[1523905726] transaction","detail":"{read_only:false; response_revision:1069; number_of_response:1; }","duration":"240.204104ms","start":"2024-01-16T01:58:21.502242Z","end":"2024-01-16T01:58:21.742447Z","steps":["trace[1523905726] 'process raft request'  (duration: 237.321202ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:58:30.639822Z","caller":"traceutil/trace.go:171","msg":"trace[750266840] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1167; }","duration":"215.542751ms","start":"2024-01-16T01:58:30.424265Z","end":"2024-01-16T01:58:30.639808Z","steps":["trace[750266840] 'read index received'  (duration: 215.396327ms)","trace[750266840] 'applied index is now lower than readState.Index'  (duration: 146.048µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T01:58:30.639952Z","caller":"traceutil/trace.go:171","msg":"trace[1811847833] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"260.183714ms","start":"2024-01-16T01:58:30.379761Z","end":"2024-01-16T01:58:30.639944Z","steps":["trace[1811847833] 'process raft request'  (duration: 259.939832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:30.640081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.842144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-16T01:58:30.640107Z","caller":"traceutil/trace.go:171","msg":"trace[1420506871] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1131; }","duration":"215.881431ms","start":"2024-01-16T01:58:30.424218Z","end":"2024-01-16T01:58:30.6401Z","steps":["trace[1420506871] 'agreement among raft nodes before linearized reading'  (duration: 215.811312ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:58:30.858381Z","caller":"traceutil/trace.go:171","msg":"trace[1528159913] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"211.854915ms","start":"2024-01-16T01:58:30.64651Z","end":"2024-01-16T01:58:30.858365Z","steps":["trace[1528159913] 'process raft request'  (duration: 207.387752ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:58:30.864878Z","caller":"traceutil/trace.go:171","msg":"trace[2139513759] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"211.068793ms","start":"2024-01-16T01:58:30.653794Z","end":"2024-01-16T01:58:30.864863Z","steps":["trace[2139513759] 'process raft request'  (duration: 210.42822ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:58:40.004815Z","caller":"traceutil/trace.go:171","msg":"trace[40018877] linearizableReadLoop","detail":"{readStateIndex:1228; appliedIndex:1227; }","duration":"144.683551ms","start":"2024-01-16T01:58:39.860118Z","end":"2024-01-16T01:58:40.004801Z","steps":["trace[40018877] 'read index received'  (duration: 144.530691ms)","trace[40018877] 'applied index is now lower than readState.Index'  (duration: 152.388µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T01:58:40.005163Z","caller":"traceutil/trace.go:171","msg":"trace[2072463945] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"345.598526ms","start":"2024-01-16T01:58:39.659547Z","end":"2024-01-16T01:58:40.005145Z","steps":["trace[2072463945] 'process raft request'  (duration: 345.141975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:58:40.005641Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:58:39.659529Z","time spent":"345.816812ms","remote":"127.0.0.1:39704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":9161,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-n8nxf\" mod_revision:1095 > success:<request_put:<key:\"/registry/pods/gadget/gadget-n8nxf\" value_size:9119 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-n8nxf\" > >"}
	{"level":"warn","ts":"2024-01-16T01:58:40.006151Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.042524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-874655\" ","response":"range_response_count:1 size:7200"}
	{"level":"info","ts":"2024-01-16T01:58:40.006458Z","caller":"traceutil/trace.go:171","msg":"trace[226775157] range","detail":"{range_begin:/registry/minions/addons-874655; range_end:; response_count:1; response_revision:1189; }","duration":"146.353772ms","start":"2024-01-16T01:58:39.860092Z","end":"2024-01-16T01:58:40.006446Z","steps":["trace[226775157] 'agreement among raft nodes before linearized reading'  (duration: 146.025103ms)"],"step_count":1}
	
	
	==> gcp-auth [e8bf9ea1434a26dbc40b7355ac426fd430754cf0d247fdfe5ad11dac1d8158cd] <==
	2024/01/16 01:58:38 GCP Auth Webhook started!
	2024/01/16 01:58:50 Ready to marshal response ...
	2024/01/16 01:58:50 Ready to write response ...
	2024/01/16 01:58:52 Ready to marshal response ...
	2024/01/16 01:58:52 Ready to write response ...
	2024/01/16 01:58:52 Ready to marshal response ...
	2024/01/16 01:58:52 Ready to write response ...
	
	
	==> kernel <==
	 01:58:58 up 2 min,  0 users,  load average: 2.59, 1.45, 0.57
	Linux addons-874655 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a401d2fb30cdee44bc8f2d539cd9e4a6b44f3dd6933a62fac33ce544e6a90d21] <==
	I0116 01:57:23.565329       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.101.252.84"}
	I0116 01:57:23.661187       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0116 01:57:24.997769       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 01:57:25.922061       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.68.207"}
	I0116 01:57:25.956173       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0116 01:57:26.178219       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.31.113"}
	W0116 01:57:27.419135       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 01:57:28.120657       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.73.91"}
	I0116 01:57:55.919929       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 01:57:57.023772       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.157.40:443: connect: connection refused
	E0116 01:57:57.025744       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.157.40:443: connect: connection refused
	W0116 01:57:57.027455       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 01:57:57.027689       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0116 01:57:57.029746       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.157.40:443: connect: connection refused
	I0116 01:57:57.030646       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 01:57:57.056820       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.157.40:443: connect: connection refused
	E0116 01:57:57.099553       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.157.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.157.40:443: connect: connection refused
	I0116 01:57:57.249886       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 01:58:45.550820       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0116 01:58:45.557132       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0116 01:58:46.675590       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 01:58:58.046006       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [138e7cf6410ad1a952ff22471fbad10e7fc87031924284c46e86f2b6c7c9cb7c] <==
	I0116 01:58:38.673191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="18.371325ms"
	I0116 01:58:38.673763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="65.915µs"
	I0116 01:58:39.135347       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:58:41.084428       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:58:45.746261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="10.661µs"
	E0116 01:58:46.677623       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 01:58:47.798726       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 01:58:47.798834       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 01:58:47.927179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="25.920614ms"
	I0116 01:58:47.927719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="455.119µs"
	W0116 01:58:50.473633       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 01:58:50.473671       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 01:58:52.177221       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0116 01:58:52.330264       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:58:52.330996       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:58:54.021252       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:58:54.028419       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	W0116 01:58:54.033566       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 01:58:54.033780       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 01:58:54.069890       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:58:54.072822       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:58:55.686995       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.848µs"
	I0116 01:58:55.778424       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0116 01:58:56.084834       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:58:56.084892       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [b6c383016f6feb534657bde7edf8af0ffe2f72fd3f6cac8f6e51154801314ec8] <==
	I0116 01:57:12.650678       1 server_others.go:69] "Using iptables proxy"
	I0116 01:57:12.662010       1 node.go:141] Successfully retrieved node IP: 192.168.39.252
	I0116 01:57:12.874372       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 01:57:12.874413       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 01:57:12.883993       1 server_others.go:152] "Using iptables Proxier"
	I0116 01:57:12.884056       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 01:57:12.884228       1 server.go:846] "Version info" version="v1.28.4"
	I0116 01:57:12.887516       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 01:57:12.888504       1 config.go:188] "Starting service config controller"
	I0116 01:57:12.888524       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 01:57:12.888545       1 config.go:97] "Starting endpoint slice config controller"
	I0116 01:57:12.888549       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 01:57:12.892722       1 config.go:315] "Starting node config controller"
	I0116 01:57:12.892760       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 01:57:12.989389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 01:57:12.989492       1 shared_informer.go:318] Caches are synced for service config
	I0116 01:57:13.115409       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d3b48cbdf9598c266af4e49abf8b012e38f15d1a33f818222bc4ad874add39c0] <==
	W0116 01:56:56.871125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 01:56:56.871151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 01:56:56.948457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 01:56:56.948723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 01:56:56.973206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 01:56:56.973588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 01:56:56.991668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 01:56:56.992083       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 01:56:57.015917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 01:56:57.016226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 01:56:57.082338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 01:56:57.082365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 01:56:57.105452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 01:56:57.105500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 01:56:57.261757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 01:56:57.261846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 01:56:57.293938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 01:56:57.293984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 01:56:57.337040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 01:56:57.337092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 01:56:57.349056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 01:56:57.349105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 01:56:57.501070       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 01:56:57.501615       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 01:57:00.150409       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 01:56:25 UTC, ends at Tue 2024-01-16 01:58:58 UTC. --
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.139415    1215 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa567d99-ecd4-4bcb-8dba-42cce4221e16-script" (OuterVolumeSpecName: "script") pod "aa567d99-ecd4-4bcb-8dba-42cce4221e16" (UID: "aa567d99-ecd4-4bcb-8dba-42cce4221e16"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.144088    1215 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa567d99-ecd4-4bcb-8dba-42cce4221e16-kube-api-access-q8sxp" (OuterVolumeSpecName: "kube-api-access-q8sxp") pod "aa567d99-ecd4-4bcb-8dba-42cce4221e16" (UID: "aa567d99-ecd4-4bcb-8dba-42cce4221e16"). InnerVolumeSpecName "kube-api-access-q8sxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.239350    1215 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/aa567d99-ecd4-4bcb-8dba-42cce4221e16-script\") on node \"addons-874655\" DevicePath \"\""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.239404    1215 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aa567d99-ecd4-4bcb-8dba-42cce4221e16-gcp-creds\") on node \"addons-874655\" DevicePath \"\""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.239416    1215 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/aa567d99-ecd4-4bcb-8dba-42cce4221e16-data\") on node \"addons-874655\" DevicePath \"\""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.239429    1215 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q8sxp\" (UniqueName: \"kubernetes.io/projected/aa567d99-ecd4-4bcb-8dba-42cce4221e16-kube-api-access-q8sxp\") on node \"addons-874655\" DevicePath \"\""
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.715435    1215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1907fb2e-d297-4c24-82d4-d7d8736b29cf" path="/var/lib/kubelet/pods/1907fb2e-d297-4c24-82d4-d7d8736b29cf/volumes"
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.716096    1215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed" path="/var/lib/kubelet/pods/64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed/volumes"
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.716800    1215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a88dbfda-64b4-4b19-b555-d3c1125242f9" path="/var/lib/kubelet/pods/a88dbfda-64b4-4b19-b555-d3c1125242f9/volumes"
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.717430    1215 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aa567d99-ecd4-4bcb-8dba-42cce4221e16" path="/var/lib/kubelet/pods/aa567d99-ecd4-4bcb-8dba-42cce4221e16/volumes"
	Jan 16 01:58:57 addons-874655 kubelet[1215]: I0116 01:58:57.891712    1215 scope.go:117] "RemoveContainer" containerID="c3bea67148c4bf08c382a847bed299c160374f0ac76a903ad2af44589a0683db"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340412    1215 topology_manager.go:215] "Topology Admit Handler" podUID="b3265e99-bad5-4210-8d5d-dcdba5402df4" podNamespace="default" podName="test-local-path"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: E0116 01:58:58.340492    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a88dbfda-64b4-4b19-b555-d3c1125242f9" containerName="nvidia-device-plugin-ctr"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: E0116 01:58:58.340505    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64f91c39-5fc4-4a97-b018-fa61d877c001" containerName="registry-test"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: E0116 01:58:58.340513    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1907fb2e-d297-4c24-82d4-d7d8736b29cf" containerName="registry"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: E0116 01:58:58.340520    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed" containerName="registry-proxy"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: E0116 01:58:58.340528    1215 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa567d99-ecd4-4bcb-8dba-42cce4221e16" containerName="helper-pod"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340563    1215 memory_manager.go:346] "RemoveStaleState removing state" podUID="a88dbfda-64b4-4b19-b555-d3c1125242f9" containerName="nvidia-device-plugin-ctr"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340570    1215 memory_manager.go:346] "RemoveStaleState removing state" podUID="64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed" containerName="registry-proxy"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340578    1215 memory_manager.go:346] "RemoveStaleState removing state" podUID="64f91c39-5fc4-4a97-b018-fa61d877c001" containerName="registry-test"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340586    1215 memory_manager.go:346] "RemoveStaleState removing state" podUID="aa567d99-ecd4-4bcb-8dba-42cce4221e16" containerName="helper-pod"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.340592    1215 memory_manager.go:346] "RemoveStaleState removing state" podUID="1907fb2e-d297-4c24-82d4-d7d8736b29cf" containerName="registry"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.447511    1215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b3265e99-bad5-4210-8d5d-dcdba5402df4-gcp-creds\") pod \"test-local-path\" (UID: \"b3265e99-bad5-4210-8d5d-dcdba5402df4\") " pod="default/test-local-path"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.447731    1215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njw75\" (UniqueName: \"kubernetes.io/projected/b3265e99-bad5-4210-8d5d-dcdba5402df4-kube-api-access-njw75\") pod \"test-local-path\" (UID: \"b3265e99-bad5-4210-8d5d-dcdba5402df4\") " pod="default/test-local-path"
	Jan 16 01:58:58 addons-874655 kubelet[1215]: I0116 01:58:58.448043    1215 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5fe80822-5bac-479e-b226-db3831abc964\" (UniqueName: \"kubernetes.io/host-path/b3265e99-bad5-4210-8d5d-dcdba5402df4-pvc-5fe80822-5bac-479e-b226-db3831abc964\") pod \"test-local-path\" (UID: \"b3265e99-bad5-4210-8d5d-dcdba5402df4\") " pod="default/test-local-path"
	
	
	==> storage-provisioner [0185312efa995d56c35f389356765d2ec3d516115482241acb480471655499b8] <==
	I0116 01:57:25.094077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 01:57:25.150567       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 01:57:25.150608       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 01:57:25.300043       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 01:57:25.310362       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-874655_5444d8fd-b96b-4714-92a7-bf56e0bcbdaf!
	I0116 01:57:25.586865       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72f4f89e-1aba-4eb9-a4d8-d679c4676ab7", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-874655_5444d8fd-b96b-4714-92a7-bf56e0bcbdaf became leader
	I0116 01:57:25.715423       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-874655_5444d8fd-b96b-4714-92a7-bf56e0bcbdaf!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-874655 -n addons-874655
helpers_test.go:261: (dbg) Run:  kubectl --context addons-874655 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-kkrqg ingress-nginx-admission-patch-2sqsd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-874655 describe pod test-local-path ingress-nginx-admission-create-kkrqg ingress-nginx-admission-patch-2sqsd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-874655 describe pod test-local-path ingress-nginx-admission-create-kkrqg ingress-nginx-admission-patch-2sqsd: exit status 1 (79.097293ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-874655/192.168.39.252
	Start Time:       Tue, 16 Jan 2024 01:58:58 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-njw75 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-njw75:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/test-local-path to addons-874655
	  Normal  Pulling    0s    kubelet            Pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kkrqg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2sqsd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-874655 describe pod test-local-path ingress-nginx-admission-create-kkrqg ingress-nginx-admission-patch-2sqsd: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (2.98s)


Test pass (278/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 22.54
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.16
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 14.67
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.16
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 19.65
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.65
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.16
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.61
31 TestOffline 91.03
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 145.86
38 TestAddons/parallel/Registry 16.95
39 TestAddons/parallel/Ingress 21.39
40 TestAddons/parallel/InspektorGadget 12
41 TestAddons/parallel/MetricsServer 7.13
42 TestAddons/parallel/HelmTiller 16.77
44 TestAddons/parallel/CSI 81.53
46 TestAddons/parallel/CloudSpanner 5.72
47 TestAddons/parallel/LocalPath 12.22
48 TestAddons/parallel/NvidiaDevicePlugin 5.71
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 92.59
54 TestCertOptions 72.69
55 TestCertExpiration 243.15
57 TestForceSystemdFlag 81.81
58 TestForceSystemdEnv 69.45
60 TestKVMDriverInstallOrUpdate 5.55
64 TestErrorSpam/setup 47.76
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.84
67 TestErrorSpam/pause 1.6
68 TestErrorSpam/unpause 1.67
69 TestErrorSpam/stop 1.49
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.8
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.61
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.97
81 TestFunctional/serial/CacheCmd/cache/add_local 2.59
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.31
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 40.85
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.53
92 TestFunctional/serial/LogsFileCmd 1.49
93 TestFunctional/serial/InvalidService 4.31
95 TestFunctional/parallel/ConfigCmd 0.48
96 TestFunctional/parallel/DashboardCmd 15.15
97 TestFunctional/parallel/DryRun 0.35
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 0.95
103 TestFunctional/parallel/ServiceCmdConnect 20.67
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 45.74
107 TestFunctional/parallel/SSHCmd 0.51
108 TestFunctional/parallel/CpCmd 1.7
109 TestFunctional/parallel/MySQL 25.85
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.63
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
119 TestFunctional/parallel/License 0.93
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.95
125 TestFunctional/parallel/ServiceCmd/DeployApp 21.19
135 TestFunctional/parallel/ServiceCmd/List 1.09
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 4.67
141 TestFunctional/parallel/ImageCommands/Setup 2.01
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
144 TestFunctional/parallel/ServiceCmd/Format 0.38
145 TestFunctional/parallel/ServiceCmd/URL 0.36
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.38
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
148 TestFunctional/parallel/ProfileCmd/profile_list 0.36
149 TestFunctional/parallel/MountCmd/any-port 9.89
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.35
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.51
153 TestFunctional/parallel/MountCmd/specific-port 2.01
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.57
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.55
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.14
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 82.84
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.45
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 43.55
172 TestJSONOutput/start/Command 62.85
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.65
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.63
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 6.45
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.23
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 95.8
204 TestMountStart/serial/StartWithMountFirst 28.52
205 TestMountStart/serial/VerifyMountFirst 0.43
206 TestMountStart/serial/StartWithMountSecond 29.19
207 TestMountStart/serial/VerifyMountSecond 0.43
208 TestMountStart/serial/DeleteFirst 0.7
209 TestMountStart/serial/VerifyMountPostDelete 0.43
210 TestMountStart/serial/Stop 1.1
211 TestMountStart/serial/RestartStopped 22.73
212 TestMountStart/serial/VerifyMountPostStop 0.43
215 TestMultiNode/serial/FreshStart2Nodes 124.73
216 TestMultiNode/serial/DeployApp2Nodes 5.4
217 TestMultiNode/serial/PingHostFrom2Pods 0.97
218 TestMultiNode/serial/AddNode 44.16
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.24
221 TestMultiNode/serial/CopyFile 8.18
222 TestMultiNode/serial/StopNode 2.23
223 TestMultiNode/serial/StartAfterStop 26.92
224 TestMultiNode/serial/RestartKeepsNodes 310.24
225 TestMultiNode/serial/DeleteNode 1.83
226 TestMultiNode/serial/StopMultiNode 182.99
227 TestMultiNode/serial/RestartMultiNode 89.13
228 TestMultiNode/serial/ValidateNameConflict 55.18
233 TestPreload 351.85
235 TestScheduledStopUnix 120.18
239 TestRunningBinaryUpgrade 210.34
241 TestKubernetesUpgrade 205.99
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
248 TestPause/serial/Start 72.45
254 TestNoKubernetes/serial/StartWithK8s 99.79
255 TestPause/serial/SecondStartNoReconfiguration 34.18
256 TestNoKubernetes/serial/StartWithStopK8s 50.35
257 TestPause/serial/Pause 0.86
258 TestPause/serial/VerifyStatus 0.3
259 TestPause/serial/Unpause 0.77
260 TestPause/serial/PauseAgain 0.85
261 TestPause/serial/DeletePaused 1.44
262 TestPause/serial/VerifyDeletedResources 0.51
263 TestNoKubernetes/serial/Start 32.16
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
268 TestNoKubernetes/serial/ProfileList 22.34
273 TestNetworkPlugins/group/false 4.29
277 TestNoKubernetes/serial/Stop 1.49
278 TestNoKubernetes/serial/StartNoArgs 43.81
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
280 TestStoppedBinaryUpgrade/Setup 2.21
282 TestStartStop/group/old-k8s-version/serial/FirstStart 153.03
283 TestStoppedBinaryUpgrade/Upgrade 142.99
285 TestStartStop/group/no-preload/serial/FirstStart 107.18
286 TestStartStop/group/no-preload/serial/DeployApp 10.34
288 TestStartStop/group/embed-certs/serial/FirstStart 61.35
289 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
291 TestStartStop/group/no-preload/serial/Stop 91.78
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.48
294 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
296 TestStartStop/group/old-k8s-version/serial/Stop 92.28
297 TestStartStop/group/embed-certs/serial/DeployApp 9.34
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
299 TestStartStop/group/embed-certs/serial/Stop 91.58
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.28
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
304 TestStartStop/group/no-preload/serial/SecondStart 329.44
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/old-k8s-version/serial/SecondStart 186.74
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
308 TestStartStop/group/embed-certs/serial/SecondStart 330.88
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 335.24
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/old-k8s-version/serial/Pause 2.77
316 TestStartStop/group/newest-cni/serial/FirstStart 58.56
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
319 TestStartStop/group/newest-cni/serial/Stop 7.13
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
321 TestStartStop/group/newest-cni/serial/SecondStart 51.15
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
323 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/newest-cni/serial/Pause 2.77
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
328 TestNetworkPlugins/group/auto/Start 63.87
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
330 TestStartStop/group/no-preload/serial/Pause 2.92
331 TestNetworkPlugins/group/kindnet/Start 85.99
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.01
333 TestNetworkPlugins/group/auto/KubeletFlags 0.29
334 TestNetworkPlugins/group/auto/NetCatPod 11.4
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
336 TestNetworkPlugins/group/auto/DNS 0.19
337 TestNetworkPlugins/group/auto/Localhost 0.17
338 TestNetworkPlugins/group/auto/HairPin 0.16
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
340 TestStartStop/group/embed-certs/serial/Pause 3.53
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
342 TestNetworkPlugins/group/calico/Start 97.46
343 TestNetworkPlugins/group/custom-flannel/Start 108.09
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.85
350 TestNetworkPlugins/group/enable-default-cni/Start 99.44
351 TestNetworkPlugins/group/kindnet/DNS 0.17
352 TestNetworkPlugins/group/kindnet/Localhost 0.13
353 TestNetworkPlugins/group/kindnet/HairPin 0.14
354 TestNetworkPlugins/group/flannel/Start 128.53
355 TestNetworkPlugins/group/calico/ControllerPod 6.01
356 TestNetworkPlugins/group/calico/KubeletFlags 0.23
357 TestNetworkPlugins/group/calico/NetCatPod 10.38
358 TestNetworkPlugins/group/calico/DNS 0.22
359 TestNetworkPlugins/group/calico/Localhost 0.19
360 TestNetworkPlugins/group/calico/HairPin 0.21
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.44
363 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
364 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
365 TestNetworkPlugins/group/custom-flannel/DNS 0.24
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
371 TestNetworkPlugins/group/bridge/Start 67.66
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
374 TestNetworkPlugins/group/flannel/NetCatPod 10.28
375 TestNetworkPlugins/group/flannel/DNS 0.17
376 TestNetworkPlugins/group/flannel/Localhost 0.14
377 TestNetworkPlugins/group/flannel/HairPin 0.16
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
379 TestNetworkPlugins/group/bridge/NetCatPod 9.28
380 TestNetworkPlugins/group/bridge/DNS 0.17
381 TestNetworkPlugins/group/bridge/Localhost 0.16
382 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (22.54s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-599955 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-599955 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (22.541549845s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.54s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-599955
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-599955: exit status 85 (85.043749ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |          |
	|         | -p download-only-599955        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:55:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:55:13.355468  565633 out.go:296] Setting OutFile to fd 1 ...
	I0116 01:55:13.355760  565633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:13.355771  565633 out.go:309] Setting ErrFile to fd 2...
	I0116 01:55:13.355776  565633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:13.355991  565633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	W0116 01:55:13.356117  565633 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17967-558382/.minikube/config/config.json: open /home/jenkins/minikube-integration/17967-558382/.minikube/config/config.json: no such file or directory
	I0116 01:55:13.356709  565633 out.go:303] Setting JSON to true
	I0116 01:55:13.357791  565633 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9457,"bootTime":1705360657,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 01:55:13.357868  565633 start.go:138] virtualization: kvm guest
	I0116 01:55:13.360450  565633 out.go:97] [download-only-599955] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 01:55:13.362275  565633 out.go:169] MINIKUBE_LOCATION=17967
	W0116 01:55:13.360573  565633 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 01:55:13.360622  565633 notify.go:220] Checking for updates...
	I0116 01:55:13.365872  565633 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 01:55:13.367829  565633 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 01:55:13.369575  565633 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:55:13.371207  565633 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 01:55:13.374505  565633 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:55:13.374799  565633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:55:13.408850  565633 out.go:97] Using the kvm2 driver based on user configuration
	I0116 01:55:13.408891  565633 start.go:298] selected driver: kvm2
	I0116 01:55:13.408900  565633 start.go:902] validating driver "kvm2" against <nil>
	I0116 01:55:13.409287  565633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:13.409432  565633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-558382/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 01:55:13.426173  565633 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 01:55:13.426243  565633 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:55:13.426776  565633 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 01:55:13.426943  565633 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:55:13.427020  565633 cni.go:84] Creating CNI manager for ""
	I0116 01:55:13.427033  565633 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:55:13.427048  565633 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:55:13.427057  565633 start_flags.go:321] config:
	{Name:download-only-599955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-599955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:55:13.427270  565633 iso.go:125] acquiring lock: {Name:mkfcdc81fb6f1fb9928eb379c0846826cfbbc8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:13.429407  565633 out.go:97] Downloading VM boot image ...
	I0116 01:55:13.429456  565633 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17967-558382/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 01:55:21.865838  565633 out.go:97] Starting control plane node download-only-599955 in cluster download-only-599955
	I0116 01:55:21.865887  565633 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 01:55:21.960478  565633 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0116 01:55:21.960518  565633 cache.go:56] Caching tarball of preloaded images
	I0116 01:55:21.960699  565633 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0116 01:55:21.963167  565633 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 01:55:21.963208  565633 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0116 01:55:22.069649  565633 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-599955"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-599955
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.28.4/json-events (14.67s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-772119 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-772119 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (14.668395602s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.67s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-772119
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-772119: exit status 85 (84.624444ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-599955        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| delete  | -p download-only-599955        | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| start   | -o=json --download-only        | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-772119        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:55:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:55:36.295130  565816 out.go:296] Setting OutFile to fd 1 ...
	I0116 01:55:36.295253  565816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:36.295257  565816 out.go:309] Setting ErrFile to fd 2...
	I0116 01:55:36.295262  565816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:36.295522  565816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 01:55:36.296226  565816 out.go:303] Setting JSON to true
	I0116 01:55:36.297313  565816 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9480,"bootTime":1705360657,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 01:55:36.297399  565816 start.go:138] virtualization: kvm guest
	I0116 01:55:36.299851  565816 out.go:97] [download-only-772119] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 01:55:36.301414  565816 out.go:169] MINIKUBE_LOCATION=17967
	I0116 01:55:36.300116  565816 notify.go:220] Checking for updates...
	I0116 01:55:36.304683  565816 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 01:55:36.306340  565816 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 01:55:36.307788  565816 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:55:36.309220  565816 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 01:55:36.312022  565816 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:55:36.312348  565816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:55:36.346455  565816 out.go:97] Using the kvm2 driver based on user configuration
	I0116 01:55:36.346544  565816 start.go:298] selected driver: kvm2
	I0116 01:55:36.346563  565816 start.go:902] validating driver "kvm2" against <nil>
	I0116 01:55:36.347195  565816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:36.347335  565816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-558382/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 01:55:36.363329  565816 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 01:55:36.363415  565816 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:55:36.363994  565816 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 01:55:36.364175  565816 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:55:36.364273  565816 cni.go:84] Creating CNI manager for ""
	I0116 01:55:36.364291  565816 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:55:36.364311  565816 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:55:36.364323  565816 start_flags.go:321] config:
	{Name:download-only-772119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-772119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:55:36.364505  565816 iso.go:125] acquiring lock: {Name:mkfcdc81fb6f1fb9928eb379c0846826cfbbc8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:36.366522  565816 out.go:97] Starting control plane node download-only-772119 in cluster download-only-772119
	I0116 01:55:36.366541  565816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 01:55:36.723988  565816 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0116 01:55:36.724036  565816 cache.go:56] Caching tarball of preloaded images
	I0116 01:55:36.724192  565816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0116 01:55:36.726713  565816 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 01:55:36.726749  565816 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0116 01:55:36.825152  565816 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-772119"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-772119
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (19.65s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-542475 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-542475 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (19.653031859s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (19.65s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.65s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-542475
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-542475: exit status 85 (654.094991ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-599955           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| delete  | -p download-only-599955           | download-only-599955 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| start   | -o=json --download-only           | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-772119           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| delete  | -p download-only-772119           | download-only-772119 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC | 16 Jan 24 01:55 UTC |
	| start   | -o=json --download-only           | download-only-542475 | jenkins | v1.32.0 | 16 Jan 24 01:55 UTC |                     |
	|         | -p download-only-542475           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:55:51
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:55:51.357833  565989 out.go:296] Setting OutFile to fd 1 ...
	I0116 01:55:51.358108  565989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:51.358118  565989 out.go:309] Setting ErrFile to fd 2...
	I0116 01:55:51.358123  565989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:55:51.358316  565989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 01:55:51.358918  565989 out.go:303] Setting JSON to true
	I0116 01:55:51.359967  565989 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9495,"bootTime":1705360657,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 01:55:51.360037  565989 start.go:138] virtualization: kvm guest
	I0116 01:55:51.362335  565989 out.go:97] [download-only-542475] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 01:55:51.363968  565989 out.go:169] MINIKUBE_LOCATION=17967
	I0116 01:55:51.362573  565989 notify.go:220] Checking for updates...
	I0116 01:55:51.366863  565989 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 01:55:51.368610  565989 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 01:55:51.370501  565989 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 01:55:51.372018  565989 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 01:55:51.374738  565989 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:55:51.375107  565989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:55:51.409583  565989 out.go:97] Using the kvm2 driver based on user configuration
	I0116 01:55:51.409623  565989 start.go:298] selected driver: kvm2
	I0116 01:55:51.409633  565989 start.go:902] validating driver "kvm2" against <nil>
	I0116 01:55:51.410157  565989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:51.410283  565989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17967-558382/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 01:55:51.426025  565989 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 01:55:51.426114  565989 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:55:51.426856  565989 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 01:55:51.427091  565989 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:55:51.427178  565989 cni.go:84] Creating CNI manager for ""
	I0116 01:55:51.427197  565989 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0116 01:55:51.427212  565989 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:55:51.427232  565989 start_flags.go:321] config:
	{Name:download-only-542475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-542475 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:55:51.427484  565989 iso.go:125] acquiring lock: {Name:mkfcdc81fb6f1fb9928eb379c0846826cfbbc8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:55:51.429391  565989 out.go:97] Starting control plane node download-only-542475 in cluster download-only-542475
	I0116 01:55:51.429418  565989 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 01:55:52.163518  565989 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0116 01:55:52.163572  565989 cache.go:56] Caching tarball of preloaded images
	I0116 01:55:52.163731  565989 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 01:55:52.165691  565989 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 01:55:52.165727  565989 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 01:55:52.263009  565989 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0116 01:56:03.865935  565989 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 01:56:03.866047  565989 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-558382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0116 01:56:04.681757  565989 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0116 01:56:04.682138  565989 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/download-only-542475/config.json ...
	I0116 01:56:04.682171  565989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/download-only-542475/config.json: {Name:mkee19247b7f620e26ca0b2117dd01e8d4473692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:56:04.682340  565989 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0116 01:56:04.682472  565989 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17967-558382/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-542475"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.65s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-542475
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-142069 --alsologtostderr --binary-mirror http://127.0.0.1:45435 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-142069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-142069
--- PASS: TestBinaryMirror (0.61s)

TestOffline (91.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-587788 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-587788 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m30.185263265s)
helpers_test.go:175: Cleaning up "offline-containerd-587788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-587788
--- PASS: TestOffline (91.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-874655
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-874655: exit status 85 (77.613603ms)

-- stdout --
	* Profile "addons-874655" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-874655"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-874655
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-874655: exit status 85 (76.666907ms)

-- stdout --
	* Profile "addons-874655" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-874655"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (145.86s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-874655 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-874655 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.863612837s)
--- PASS: TestAddons/Setup (145.86s)

TestAddons/parallel/Registry (16.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.426686ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-78bhv" [1907fb2e-d297-4c24-82d4-d7d8736b29cf] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007320254s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x6nzc" [64fc9ce0-7e9b-4533-89a0-30e3a30cc0ed] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006335208s
addons_test.go:340: (dbg) Run:  kubectl --context addons-874655 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-874655 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-874655 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.936655052s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 ip
2024/01/16 01:58:55 [DEBUG] GET http://192.168.39.252:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.95s)

TestAddons/parallel/Ingress (21.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-874655 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-874655 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-874655 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c5bdb7f4-acda-41fe-a9f9-27746f7e6474] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c5bdb7f4-acda-41fe-a9f9-27746f7e6474] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005032222s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-874655 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.252
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 addons disable ingress-dns --alsologtostderr -v=1: (1.193328499s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 addons disable ingress --alsologtostderr -v=1: (7.863908456s)
--- PASS: TestAddons/parallel/Ingress (21.39s)

TestAddons/parallel/InspektorGadget (12s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n8nxf" [7e3327cd-a096-4545-adfd-00c9d60a4cd5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00513647s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-874655
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-874655: (5.988282879s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

TestAddons/parallel/MetricsServer (7.13s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 28.687404ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-m4wqp" [be51a3a4-8a00-421c-88df-222a2ebded47] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005793914s
addons_test.go:415: (dbg) Run:  kubectl --context addons-874655 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 addons disable metrics-server --alsologtostderr -v=1: (1.020673528s)
--- PASS: TestAddons/parallel/MetricsServer (7.13s)

                                                
                                    
TestAddons/parallel/HelmTiller (16.77s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.683875ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9f2w9" [068035ed-e81b-4a9b-921a-c3a8b21cdf49] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009465128s
addons_test.go:473: (dbg) Run:  kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.555813713s)
addons_test.go:478: kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.454398065s)
addons_test.go:478: kubectl --context addons-874655 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 addons disable helm-tiller --alsologtostderr -v=1: (1.051225456s)
--- PASS: TestAddons/parallel/HelmTiller (16.77s)

                                                
                                    
TestAddons/parallel/CSI (81.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 34.31316ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-874655 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-874655 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6ee3e90f-cfb4-4169-8fff-219a372095c8] Pending
helpers_test.go:344: "task-pv-pod" [6ee3e90f-cfb4-4169-8fff-219a372095c8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6ee3e90f-cfb4-4169-8fff-219a372095c8] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005793845s
addons_test.go:584: (dbg) Run:  kubectl --context addons-874655 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-874655 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-874655 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-874655 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-874655 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-874655 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-874655 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [da37ce15-d32e-4b98-b100-012fef0df689] Pending
helpers_test.go:344: "task-pv-pod-restore" [da37ce15-d32e-4b98-b100-012fef0df689] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [da37ce15-d32e-4b98-b100-012fef0df689] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.005062638s
addons_test.go:626: (dbg) Run:  kubectl --context addons-874655 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-874655 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-874655 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-874655 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.802103968s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (81.53s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-pgclx" [f8472c33-415d-452d-b5b7-082bce8b9830] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005462391s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-874655
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                    
TestAddons/parallel/LocalPath (12.22s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-874655 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-874655 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b3265e99-bad5-4210-8d5d-dcdba5402df4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b3265e99-bad5-4210-8d5d-dcdba5402df4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b3265e99-bad5-4210-8d5d-dcdba5402df4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005379286s
addons_test.go:891: (dbg) Run:  kubectl --context addons-874655 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 ssh "cat /opt/local-path-provisioner/pvc-5fe80822-5bac-479e-b226-db3831abc964_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-874655 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-874655 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-874655 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.22s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.71s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7xfml" [a88dbfda-64b4-4b19-b555-d3c1125242f9] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005357663s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-874655
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-mjs9x" [8e2c8c40-d08c-4401-a787-406abdc1901f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004776377s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-874655 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-874655 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.59s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-874655
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-874655: (1m32.25156098s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-874655
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-874655
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-874655
--- PASS: TestAddons/StoppedEnableDisable (92.59s)

                                                
                                    
TestCertOptions (72.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-742638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0116 02:38:38.924866  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-742638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m11.292685149s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-742638 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-742638 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-742638 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-742638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-742638
--- PASS: TestCertOptions (72.69s)

                                                
                                    
TestCertExpiration (243.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-417833 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-417833 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (55.680077524s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-417833 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-417833 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (6.37116214s)
helpers_test.go:175: Cleaning up "cert-expiration-417833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-417833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-417833: (1.09845814s)
--- PASS: TestCertExpiration (243.15s)

                                                
                                    
TestForceSystemdFlag (81.81s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-777599 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-777599 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m20.481895397s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-777599 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-777599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-777599
--- PASS: TestForceSystemdFlag (81.81s)

                                                
                                    
TestForceSystemdEnv (69.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-600646 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-600646 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m7.636132511s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-600646 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-600646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-600646
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-600646: (1.565711194s)
--- PASS: TestForceSystemdEnv (69.45s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.55s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.55s)

                                                
                                    
TestErrorSpam/setup (47.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-391273 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-391273 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-391273 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-391273 --driver=kvm2  --container-runtime=containerd: (47.762904467s)
--- PASS: TestErrorSpam/setup (47.76s)

                                                
                                    
TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
TestErrorSpam/pause (1.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 stop: (1.308933075s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-391273 --log_dir /tmp/nospam-391273 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17967-558382/.minikube/files/etc/test/nested/copy/565621/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0116 02:03:38.927317  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:38.933175  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:38.943461  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:38.963891  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:39.004266  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:39.084651  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:39.244978  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:39.565612  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:40.206606  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:41.487816  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:03:44.048435  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-139041 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m1.798146674s)
--- PASS: TestFunctional/serial/StartWithProxy (61.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.61s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --alsologtostderr -v=8
E0116 02:03:49.168965  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-139041 --alsologtostderr -v=8: (5.611825787s)
functional_test.go:659: soft start took 5.612512328s for "functional-139041" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.61s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-139041 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:3.1: (1.636527586s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:3.3: (1.679498761s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 cache add registry.k8s.io/pause:latest: (1.656770819s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.97s)

TestFunctional/serial/CacheCmd/cache/add_local (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-139041 /tmp/TestFunctionalserialCacheCmdcacheadd_local1871517859/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache add minikube-local-cache-test:functional-139041
E0116 02:03:59.410000  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 cache add minikube-local-cache-test:functional-139041: (2.240239523s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache delete minikube-local-cache-test:functional-139041
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-139041
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.59s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (262.855874ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 cache reload: (1.503310835s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.31s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 kubectl -- --context functional-139041 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-139041 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (40.85s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0116 02:04:19.890363  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-139041 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.8493914s)
functional_test.go:757: restart took 40.849582347s for "functional-139041" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.85s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-139041 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.53s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 logs: (1.531877026s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 logs --file /tmp/TestFunctionalserialLogsFileCmd1158727192/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 logs --file /tmp/TestFunctionalserialLogsFileCmd1158727192/001/logs.txt: (1.49381577s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-139041 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-139041
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-139041: exit status 115 (328.454824ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.161:32497 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-139041 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 config get cpus: exit status 14 (78.36715ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 config get cpus: exit status 14 (77.161154ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (15.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-139041 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-139041 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 572908: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.15s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-139041 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (165.344571ms)
-- stdout --
	* [functional-139041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0116 02:05:20.262545  572624 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:05:20.262708  572624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:20.262722  572624 out.go:309] Setting ErrFile to fd 2...
	I0116 02:05:20.262730  572624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:20.263007  572624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 02:05:20.263581  572624 out.go:303] Setting JSON to false
	I0116 02:05:20.264944  572624 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10063,"bootTime":1705360657,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:05:20.265011  572624 start.go:138] virtualization: kvm guest
	I0116 02:05:20.267287  572624 out.go:177] * [functional-139041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:05:20.268972  572624 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:05:20.270572  572624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:05:20.268929  572624 notify.go:220] Checking for updates...
	I0116 02:05:20.273475  572624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 02:05:20.274757  572624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 02:05:20.276056  572624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:05:20.277383  572624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:05:20.279081  572624 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:05:20.279604  572624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:05:20.279656  572624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:05:20.294821  572624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0116 02:05:20.295503  572624 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:05:20.296177  572624 main.go:141] libmachine: Using API Version  1
	I0116 02:05:20.296208  572624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:05:20.296683  572624 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:05:20.296946  572624 main.go:141] libmachine: (functional-139041) Calling .DriverName
	I0116 02:05:20.297298  572624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:05:20.297770  572624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:05:20.297825  572624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:05:20.314329  572624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0116 02:05:20.314817  572624 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:05:20.315361  572624 main.go:141] libmachine: Using API Version  1
	I0116 02:05:20.315389  572624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:05:20.315845  572624 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:05:20.316180  572624 main.go:141] libmachine: (functional-139041) Calling .DriverName
	I0116 02:05:20.353560  572624 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 02:05:20.355014  572624 start.go:298] selected driver: kvm2
	I0116 02:05:20.355041  572624 start.go:902] validating driver "kvm2" against &{Name:functional-139041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-139041 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.161 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:05:20.355212  572624 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:05:20.357612  572624 out.go:177] 
	W0116 02:05:20.359237  572624 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 02:05:20.360777  572624 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-139041 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-139041 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (179.936652ms)
-- stdout --
	* [functional-139041] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0116 02:05:20.613372  572720 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:05:20.613513  572720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:20.613546  572720 out.go:309] Setting ErrFile to fd 2...
	I0116 02:05:20.613554  572720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:20.613861  572720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 02:05:20.614440  572720 out.go:303] Setting JSON to false
	I0116 02:05:20.615573  572720 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10064,"bootTime":1705360657,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:05:20.615641  572720 start.go:138] virtualization: kvm guest
	I0116 02:05:20.618166  572720 out.go:177] * [functional-139041] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 02:05:20.619825  572720 notify.go:220] Checking for updates...
	I0116 02:05:20.621493  572720 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:05:20.623116  572720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:05:20.625231  572720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 02:05:20.626817  572720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 02:05:20.628426  572720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:05:20.630080  572720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:05:20.632325  572720 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:05:20.632934  572720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:05:20.633011  572720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:05:20.654800  572720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0116 02:05:20.655379  572720 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:05:20.656041  572720 main.go:141] libmachine: Using API Version  1
	I0116 02:05:20.656065  572720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:05:20.656509  572720 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:05:20.656734  572720 main.go:141] libmachine: (functional-139041) Calling .DriverName
	I0116 02:05:20.657072  572720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:05:20.657558  572720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:05:20.657618  572720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:05:20.675712  572720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0116 02:05:20.676227  572720 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:05:20.676841  572720 main.go:141] libmachine: Using API Version  1
	I0116 02:05:20.676868  572720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:05:20.677219  572720 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:05:20.677433  572720 main.go:141] libmachine: (functional-139041) Calling .DriverName
	I0116 02:05:20.712784  572720 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0116 02:05:20.714443  572720 start.go:298] selected driver: kvm2
	I0116 02:05:20.714466  572720 start.go:902] validating driver "kvm2" against &{Name:functional-139041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-139041 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.161 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:05:20.714608  572720 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:05:20.717460  572720 out.go:177] 
	W0116 02:05:20.719241  572720 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 02:05:20.720721  572720 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)
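The `-f` flag in the second `status` invocation above takes a Go text/template that is rendered against minikube's status structure. As an illustration only — the `Status` struct below is a hypothetical stand-in, not minikube's real type — the expansion works like this:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status is an illustrative stand-in for the fields the -f format
// string references; it is not minikube's actual status struct.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render expands a minikube-style --format template against a Status.
func render(format string, s Status) string {
	var b strings.Builder
	tmpl := template.Must(template.New("status").Parse(format))
	if err := tmpl.Execute(&b, s); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	fmt.Println(render("host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", s))
	// host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```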

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-139041 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-139041 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-bln8h" [b84a7f3a-cfe5-4e3b-9cdb-09e8c80a94ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-bln8h" [b84a7f3a-cfe5-4e3b-9cdb-09e8c80a94ad] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.006484602s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.161:31421
functional_test.go:1674: http://192.168.50.161:31421: success! body:

Hostname: hello-node-connect-55497b8b78-bln8h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.161:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.161:31421
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.67s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3ac90925-6435-497e-9d30-34493ec30558] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005517521s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-139041 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-139041 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-139041 get pvc myclaim -o=json
E0116 02:05:00.851083  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-139041 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-139041 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff5918c5-7cb0-48f7-8e53-d10698762a3d] Pending
helpers_test.go:344: "sp-pod" [ff5918c5-7cb0-48f7-8e53-d10698762a3d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff5918c5-7cb0-48f7-8e53-d10698762a3d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.018043801s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-139041 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-139041 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-139041 delete -f testdata/storage-provisioner/pod.yaml: (1.409957261s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-139041 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fda41c14-dd06-4b3f-8f0e-8a0b23e84d1b] Pending
helpers_test.go:344: "sp-pod" [fda41c14-dd06-4b3f-8f0e-8a0b23e84d1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fda41c14-dd06-4b3f-8f0e-8a0b23e84d1b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005508667s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-139041 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.74s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh -n functional-139041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cp functional-139041:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd753018038/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh -n functional-139041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh -n functional-139041 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/MySQL (25.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-139041 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pgk6q" [a7408073-70e2-4024-8e39-e3ae22822856] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pgk6q" [a7408073-70e2-4024-8e39-e3ae22822856] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.012299947s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;": exit status 1 (176.405078ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;": exit status 1 (239.528089ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;": exit status 1 (178.977685ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-139041 exec mysql-859648c796-pgk6q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.85s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/565621/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /etc/test/nested/copy/565621/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.63s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/565621.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /etc/ssl/certs/565621.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/565621.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /usr/share/ca-certificates/565621.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/5656212.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /etc/ssl/certs/5656212.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/5656212.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /usr/share/ca-certificates/5656212.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-139041 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
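The `--template` argument above is a plain Go text/template evaluated against the JSON that `kubectl get nodes` emits. A self-contained sketch of the same expression over sample node JSON (the sample labels are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// nodeLabels runs the test's --template expression against decoded
// `kubectl get nodes -o json` style output and returns the rendering.
func nodeLabels(nodesJSON string) string {
	const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &data); err != nil {
		panic(err)
	}
	var b strings.Builder
	if err := template.Must(template.New("labels").Parse(tpl)).Execute(&b, data); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	// Hypothetical sample of what `kubectl get nodes -o json` returns.
	sample := `{"items":[{"metadata":{"labels":{"kubernetes.io/hostname":"functional-139041","kubernetes.io/os":"linux"}}}]}`
	// Prints the label keys space-separated; text/template visits
	// string map keys in sorted order.
	fmt.Println(nodeLabels(sample))
}
```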

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "sudo systemctl is-active docker": exit status 1 (291.03526ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "sudo systemctl is-active crio": exit status 1 (270.081585ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
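The non-zero exits above are the point of the test: `systemctl is-active` reports unit state through its exit status (0 when active, 3 when inactive, as the `ssh: Process exited with status 3` lines show), so a failing command is proof the other runtimes are disabled. Treating an exit code as data rather than as a hard failure can be sketched like this (`exitCode` is our own helper; `sh -c` stands in for the remote `systemctl` call):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status, so a non-zero
// exit is inspected as a result instead of aborting — the way the
// test interprets `systemctl is-active` on a disabled runtime.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // command did not even start
}

func main() {
	// Simulate `systemctl is-active docker` on a host where docker is
	// inactive: is-active exits with status 3 in that case.
	code, err := exitCode("sh", "-c", "echo inactive; exit 3")
	if err != nil {
		panic(err)
	}
	fmt.Println(code) // 3
}
```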

                                                
                                    
TestFunctional/parallel/License (0.93s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.93s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.95s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (21.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-139041 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-139041 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gbm2l" [33ed1648-eab2-4a9c-9a19-fbab3d44b42c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gbm2l" [33ed1648-eab2-4a9c-9a19-fbab3d44b42c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.006399999s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 service list: (1.085140077s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-139041 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-139041
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-139041
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-139041 image ls --format short --alsologtostderr:
I0116 02:05:38.037991  573789 out.go:296] Setting OutFile to fd 1 ...
I0116 02:05:38.038143  573789 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.038153  573789 out.go:309] Setting ErrFile to fd 2...
I0116 02:05:38.038160  573789 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.038362  573789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
I0116 02:05:38.039119  573789 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.039337  573789 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.040149  573789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.040252  573789 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.056099  573789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
I0116 02:05:38.056610  573789 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.057379  573789 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.057403  573789 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.057856  573789 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.058070  573789 main.go:141] libmachine: (functional-139041) Calling .GetState
I0116 02:05:38.060673  573789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.060728  573789 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.076490  573789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
I0116 02:05:38.077181  573789 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.077790  573789 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.077815  573789 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.078218  573789 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.078444  573789 main.go:141] libmachine: (functional-139041) Calling .DriverName
I0116 02:05:38.078710  573789 ssh_runner.go:195] Run: systemctl --version
I0116 02:05:38.078753  573789 main.go:141] libmachine: (functional-139041) Calling .GetSSHHostname
I0116 02:05:38.082170  573789 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.082498  573789 main.go:141] libmachine: (functional-139041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:23:ab", ip: ""} in network mk-functional-139041: {Iface:virbr1 ExpiryTime:2024-01-16 03:03:02 +0000 UTC Type:0 Mac:52:54:00:46:23:ab Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:functional-139041 Clientid:01:52:54:00:46:23:ab}
I0116 02:05:38.082539  573789 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined IP address 192.168.50.161 and MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.082737  573789 main.go:141] libmachine: (functional-139041) Calling .GetSSHPort
I0116 02:05:38.082943  573789 main.go:141] libmachine: (functional-139041) Calling .GetSSHKeyPath
I0116 02:05:38.083140  573789 main.go:141] libmachine: (functional-139041) Calling .GetSSHUsername
I0116 02:05:38.083319  573789 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/functional-139041/id_rsa Username:docker}
I0116 02:05:38.178028  573789 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:05:38.234747  573789 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.234765  573789 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.235000  573789 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:38.235042  573789 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.235052  573789 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:05:38.235061  573789 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.235069  573789 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.235293  573789 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.235309  573789 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-139041 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| docker.io/library/nginx                     | latest             | sha256:a87587 | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/google-containers/addon-resizer      | functional-139041  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-139041  | sha256:9f465b | 1.01kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-139041 image ls --format table --alsologtostderr:
I0116 02:05:38.313154  573879 out.go:296] Setting OutFile to fd 1 ...
I0116 02:05:38.313289  573879 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.313299  573879 out.go:309] Setting ErrFile to fd 2...
I0116 02:05:38.313305  573879 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.313515  573879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
I0116 02:05:38.314172  573879 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.314299  573879 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.314738  573879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.314804  573879 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.330848  573879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
I0116 02:05:38.331399  573879 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.332136  573879 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.332167  573879 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.332711  573879 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.332951  573879 main.go:141] libmachine: (functional-139041) Calling .GetState
I0116 02:05:38.335071  573879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.335125  573879 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.351649  573879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
I0116 02:05:38.352098  573879 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.352575  573879 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.352622  573879 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.352999  573879 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.353136  573879 main.go:141] libmachine: (functional-139041) Calling .DriverName
I0116 02:05:38.353305  573879 ssh_runner.go:195] Run: systemctl --version
I0116 02:05:38.353326  573879 main.go:141] libmachine: (functional-139041) Calling .GetSSHHostname
I0116 02:05:38.356241  573879 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.356702  573879 main.go:141] libmachine: (functional-139041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:23:ab", ip: ""} in network mk-functional-139041: {Iface:virbr1 ExpiryTime:2024-01-16 03:03:02 +0000 UTC Type:0 Mac:52:54:00:46:23:ab Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:functional-139041 Clientid:01:52:54:00:46:23:ab}
I0116 02:05:38.356720  573879 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined IP address 192.168.50.161 and MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.356871  573879 main.go:141] libmachine: (functional-139041) Calling .GetSSHPort
I0116 02:05:38.357050  573879 main.go:141] libmachine: (functional-139041) Calling .GetSSHKeyPath
I0116 02:05:38.357244  573879 main.go:141] libmachine: (functional-139041) Calling .GetSSHUsername
I0116 02:05:38.357536  573879 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/functional-139041/id_rsa Username:docker}
I0116 02:05:38.451139  573879 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:05:38.523647  573879 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.523665  573879 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.523988  573879 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:38.523995  573879 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.524032  573879 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:05:38.524047  573879 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.524060  573879 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.524373  573879 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:38.524440  573879 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.524469  573879 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-139041 image ls --format json --alsologtostderr:
[{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-139041"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:9f465b31c4f8f4ee69f268034b7d15d98a743d356963e1a2dd38049d17c4930c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-139041"],"size":"1006"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-139041 image ls --format json --alsologtostderr:
I0116 02:05:38.315693  573873 out.go:296] Setting OutFile to fd 1 ...
I0116 02:05:38.315981  573873 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.315995  573873 out.go:309] Setting ErrFile to fd 2...
I0116 02:05:38.316002  573873 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.316320  573873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
I0116 02:05:38.317205  573873 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.317365  573873 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.317793  573873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.317859  573873 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.336026  573873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
I0116 02:05:38.336683  573873 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.337336  573873 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.337367  573873 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.337815  573873 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.338084  573873 main.go:141] libmachine: (functional-139041) Calling .GetState
I0116 02:05:38.340439  573873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.340482  573873 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.359003  573873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
I0116 02:05:38.359485  573873 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.360201  573873 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.360235  573873 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.360701  573873 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.360873  573873 main.go:141] libmachine: (functional-139041) Calling .DriverName
I0116 02:05:38.361103  573873 ssh_runner.go:195] Run: systemctl --version
I0116 02:05:38.361131  573873 main.go:141] libmachine: (functional-139041) Calling .GetSSHHostname
I0116 02:05:38.363725  573873 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.364116  573873 main.go:141] libmachine: (functional-139041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:23:ab", ip: ""} in network mk-functional-139041: {Iface:virbr1 ExpiryTime:2024-01-16 03:03:02 +0000 UTC Type:0 Mac:52:54:00:46:23:ab Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:functional-139041 Clientid:01:52:54:00:46:23:ab}
I0116 02:05:38.364154  573873 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined IP address 192.168.50.161 and MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.364283  573873 main.go:141] libmachine: (functional-139041) Calling .GetSSHPort
I0116 02:05:38.364503  573873 main.go:141] libmachine: (functional-139041) Calling .GetSSHKeyPath
I0116 02:05:38.364658  573873 main.go:141] libmachine: (functional-139041) Calling .GetSSHUsername
I0116 02:05:38.364821  573873 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/functional-139041/id_rsa Username:docker}
I0116 02:05:38.462407  573873 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:05:38.547516  573873 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.547532  573873 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.549780  573873 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.549808  573873 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:05:38.549832  573873 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.549870  573873 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.549894  573873 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:38.550204  573873 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.550222  573873 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-139041 image ls --format yaml --alsologtostderr:
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9f465b31c4f8f4ee69f268034b7d15d98a743d356963e1a2dd38049d17c4930c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-139041
size: "1006"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-139041
size: "10823156"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-139041 image ls --format yaml --alsologtostderr:
I0116 02:05:38.038000  573787 out.go:296] Setting OutFile to fd 1 ...
I0116 02:05:38.038142  573787 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.038153  573787 out.go:309] Setting ErrFile to fd 2...
I0116 02:05:38.038160  573787 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.039574  573787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
I0116 02:05:38.040892  573787 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.041111  573787 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.041666  573787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.041718  573787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.056081  573787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
I0116 02:05:38.056741  573787 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.057437  573787 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.057466  573787 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.057879  573787 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.058092  573787 main.go:141] libmachine: (functional-139041) Calling .GetState
I0116 02:05:38.060784  573787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.060839  573787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.076495  573787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
I0116 02:05:38.077009  573787 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.077559  573787 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.077585  573787 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.077980  573787 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.078205  573787 main.go:141] libmachine: (functional-139041) Calling .DriverName
I0116 02:05:38.078469  573787 ssh_runner.go:195] Run: systemctl --version
I0116 02:05:38.078494  573787 main.go:141] libmachine: (functional-139041) Calling .GetSSHHostname
I0116 02:05:38.082064  573787 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.082508  573787 main.go:141] libmachine: (functional-139041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:23:ab", ip: ""} in network mk-functional-139041: {Iface:virbr1 ExpiryTime:2024-01-16 03:03:02 +0000 UTC Type:0 Mac:52:54:00:46:23:ab Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:functional-139041 Clientid:01:52:54:00:46:23:ab}
I0116 02:05:38.082529  573787 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined IP address 192.168.50.161 and MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.082735  573787 main.go:141] libmachine: (functional-139041) Calling .GetSSHPort
I0116 02:05:38.082935  573787 main.go:141] libmachine: (functional-139041) Calling .GetSSHKeyPath
I0116 02:05:38.083096  573787 main.go:141] libmachine: (functional-139041) Calling .GetSSHUsername
I0116 02:05:38.083213  573787 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/functional-139041/id_rsa Username:docker}
I0116 02:05:38.178159  573787 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:05:38.233879  573787 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.233903  573787 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.234262  573787 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.234298  573787 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:05:38.234311  573787 main.go:141] libmachine: Making call to close driver server
I0116 02:05:38.234314  573787 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:38.234321  573787 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:38.234640  573787 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:38.234663  573787 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh pgrep buildkitd: exit status 1 (261.397512ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image build -t localhost/my-image:functional-139041 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image build -t localhost/my-image:functional-139041 testdata/build --alsologtostderr: (4.152209244s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-139041 image build -t localhost/my-image:functional-139041 testdata/build --alsologtostderr:
I0116 02:05:38.302889  573863 out.go:296] Setting OutFile to fd 1 ...
I0116 02:05:38.303072  573863 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.303086  573863 out.go:309] Setting ErrFile to fd 2...
I0116 02:05:38.303094  573863 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:05:38.303354  573863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
I0116 02:05:38.304033  573863 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.304692  573863 config.go:182] Loaded profile config "functional-139041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0116 02:05:38.305232  573863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.305290  573863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.320749  573863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
I0116 02:05:38.321287  573863 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.321885  573863 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.321916  573863 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.322320  573863 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.322533  573863 main.go:141] libmachine: (functional-139041) Calling .GetState
I0116 02:05:38.324430  573863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0116 02:05:38.324473  573863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:05:38.340430  573863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
I0116 02:05:38.340864  573863 main.go:141] libmachine: () Calling .GetVersion
I0116 02:05:38.341427  573863 main.go:141] libmachine: Using API Version  1
I0116 02:05:38.341475  573863 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:05:38.341865  573863 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:05:38.342084  573863 main.go:141] libmachine: (functional-139041) Calling .DriverName
I0116 02:05:38.342411  573863 ssh_runner.go:195] Run: systemctl --version
I0116 02:05:38.342443  573863 main.go:141] libmachine: (functional-139041) Calling .GetSSHHostname
I0116 02:05:38.345518  573863 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.345982  573863 main.go:141] libmachine: (functional-139041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:23:ab", ip: ""} in network mk-functional-139041: {Iface:virbr1 ExpiryTime:2024-01-16 03:03:02 +0000 UTC Type:0 Mac:52:54:00:46:23:ab Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:functional-139041 Clientid:01:52:54:00:46:23:ab}
I0116 02:05:38.346019  573863 main.go:141] libmachine: (functional-139041) DBG | domain functional-139041 has defined IP address 192.168.50.161 and MAC address 52:54:00:46:23:ab in network mk-functional-139041
I0116 02:05:38.346199  573863 main.go:141] libmachine: (functional-139041) Calling .GetSSHPort
I0116 02:05:38.346396  573863 main.go:141] libmachine: (functional-139041) Calling .GetSSHKeyPath
I0116 02:05:38.346527  573863 main.go:141] libmachine: (functional-139041) Calling .GetSSHUsername
I0116 02:05:38.346647  573863 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/functional-139041/id_rsa Username:docker}
I0116 02:05:38.446564  573863 build_images.go:151] Building image from path: /tmp/build.2616017043.tar
I0116 02:05:38.446655  573863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:05:38.457924  573863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2616017043.tar
I0116 02:05:38.466388  573863 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2616017043.tar: stat -c "%s %y" /var/lib/minikube/build/build.2616017043.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2616017043.tar': No such file or directory
I0116 02:05:38.466436  573863 ssh_runner.go:362] scp /tmp/build.2616017043.tar --> /var/lib/minikube/build/build.2616017043.tar (3072 bytes)
I0116 02:05:38.492530  573863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2616017043
I0116 02:05:38.511670  573863 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2616017043 -xf /var/lib/minikube/build/build.2616017043.tar
I0116 02:05:38.531491  573863 containerd.go:379] Building image: /var/lib/minikube/build/build.2616017043
I0116 02:05:38.531569  573863 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2616017043 --local dockerfile=/var/lib/minikube/build/build.2616017043 --output type=image,name=localhost/my-image:functional-139041
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e28adaa142402f5bd5984cc081eaf4b8a2a707accd1eabdf82817e058c8d8137 0.0s done
#8 exporting config sha256:e900478d8292e4d4d9735559bb2655ec88a166a0f40132d0e43158f299b7f332 0.0s done
#8 naming to localhost/my-image:functional-139041
#8 naming to localhost/my-image:functional-139041 done
#8 DONE 0.2s
I0116 02:05:42.340517  573863 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2616017043 --local dockerfile=/var/lib/minikube/build/build.2616017043 --output type=image,name=localhost/my-image:functional-139041: (3.808900507s)
I0116 02:05:42.340655  573863 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2616017043
I0116 02:05:42.357851  573863 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2616017043.tar
I0116 02:05:42.370254  573863 build_images.go:207] Built localhost/my-image:functional-139041 from /tmp/build.2616017043.tar
I0116 02:05:42.370305  573863 build_images.go:123] succeeded building to: functional-139041
I0116 02:05:42.370312  573863 build_images.go:124] failed building to: 
I0116 02:05:42.370344  573863 main.go:141] libmachine: Making call to close driver server
I0116 02:05:42.370360  573863 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:42.370699  573863 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:42.370721  573863 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:05:42.370731  573863 main.go:141] libmachine: Making call to close driver server
I0116 02:05:42.370723  573863 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:42.370740  573863 main.go:141] libmachine: (functional-139041) Calling .Close
I0116 02:05:42.371005  573863 main.go:141] libmachine: (functional-139041) DBG | Closing plugin on server side
I0116 02:05:42.371102  573863 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:05:42.371124  573863 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.67s)
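Editor's note: the build log above stages the context tarball on the guest (mkdir, a `stat` existence probe, scp, tar extract) before invoking buildctl. The staging sequence can be sketched locally with coreutils; every path below is a hypothetical stand-in for `/var/lib/minikube/build`, and `cp` stands in for the scp step.

```shell
#!/bin/sh
# Sketch of the build-context staging seen in the log above.
# Hypothetical local paths; the real minikube target is /var/lib/minikube/build.
set -eu
work=$(mktemp -d)
mkdir -p "$work/context"
printf 'FROM scratch\n' > "$work/context/Dockerfile"
tar -C "$work/context" -cf "$work/build.tar" .     # pack the build context
dest="$work/staging"
mkdir -p "$dest"                                    # "sudo mkdir -p" in the log
stat -c "%s %y" "$dest/build.tar" 2>/dev/null \
  || echo "build.tar not staged yet"                # the existence probe (fails first time)
cp "$work/build.tar" "$dest/build.tar"              # scp in the log
mkdir -p "$dest/build"
tar -C "$dest/build" -xf "$dest/build.tar"          # unpack where buildctl would read it
ls "$dest/build"
```

From here the real test hands the extracted directory to `buildctl build --frontend dockerfile.v0`, which this sketch does not attempt.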

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.989719306s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-139041
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.01s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service list -o json
functional_test.go:1493: Took "509.862333ms" to run "out/minikube-linux-amd64 -p functional-139041 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.161:31523
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.161:31523
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr: (5.0023104s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "289.110764ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.828926ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/MountCmd/any-port (9.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdany-port651948436/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705370719594491208" to /tmp/TestFunctionalparallelMountCmdany-port651948436/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705370719594491208" to /tmp/TestFunctionalparallelMountCmdany-port651948436/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705370719594491208" to /tmp/TestFunctionalparallelMountCmdany-port651948436/001/test-1705370719594491208
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.674889ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 02:05 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 02:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 02:05 test-1705370719594491208
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh cat /mount-9p/test-1705370719594491208
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-139041 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2d29959f-180f-485b-b8ea-5ddf1f018d78] Pending
helpers_test.go:344: "busybox-mount" [2d29959f-180f-485b-b8ea-5ddf1f018d78] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2d29959f-180f-485b-b8ea-5ddf1f018d78] Running
helpers_test.go:344: "busybox-mount" [2d29959f-180f-485b-b8ea-5ddf1f018d78] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2d29959f-180f-485b-b8ea-5ddf1f018d78] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006850765s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-139041 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdany-port651948436/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.89s)
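Editor's note: the mount tests above tolerate one failed `findmnt -T /mount-9p | grep 9p` probe and re-run it until the 9p mount appears. A minimal version of that retry loop is sketched below; it probes `/` (always mounted) so the sketch runs anywhere, and `mountpoint`/`fstype` are the values you would swap for `/mount-9p` and `9p`.

```shell
#!/bin/sh
# Retry-until-mounted probe, in the spirit of the test's findmnt check.
# Reads /proc/mounts directly; "/" and empty fstype are stand-in values.
mountpoint="/"
fstype=""        # e.g. "9p"; empty matches any filesystem type
found=0
for attempt in 1 2 3 4 5; do
  if awk -v mp="$mountpoint" -v fs="$fstype" \
      '$2 == mp && (fs == "" || $3 == fs) { ok = 1 } END { exit !ok }' /proc/mounts
  then
    found=1
    break
  fi
  sleep 1          # back off before reprobing, as the test does
done
echo "found=$found"
```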

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "247.005832ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "65.808896ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr: (3.105729065s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.789955125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-139041
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image load --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr: (4.40028391s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.51s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdspecific-port3310959997/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.058833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdspecific-port3310959997/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "sudo umount -f /mount-9p": exit status 1 (306.390532ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-139041 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdspecific-port3310959997/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T" /mount1: exit status 1 (370.141623ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-139041 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-139041 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875941393/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image save gcr.io/google-containers/addon-resizer:functional-139041 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image save gcr.io/google-containers/addon-resizer:functional-139041 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.574400275s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image rm gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
2024/01/16 02:05:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.306766897s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-139041
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-139041 image save --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-139041 image save --daemon gcr.io/google-containers/addon-resizer:functional-139041 --alsologtostderr: (1.107591863s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-139041
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.14s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-139041
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-139041
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-139041
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (82.84s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-067010 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0116 02:06:22.772219  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-067010 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m22.843255754s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.84s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons enable ingress --alsologtostderr -v=5: (11.445494887s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.45s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (43.55s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-067010 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-067010 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.502912156s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-067010 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-067010 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d871fc38-d0c2-4a30-ae64-d45e056bc5f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d871fc38-d0c2-4a30-ae64-d45e056bc5f5] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.004147307s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-067010 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.222
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons disable ingress-dns --alsologtostderr -v=1: (13.168975392s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067010 addons disable ingress --alsologtostderr -v=1: (7.577429143s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (43.55s)

TestJSONOutput/start/Command (62.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-499003 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0116 02:08:38.925551  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-499003 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m2.850517232s)
--- PASS: TestJSONOutput/start/Command (62.85s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-499003 --output=json --user=testUser
E0116 02:09:06.613123  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-499003 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.45s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-499003 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-499003 --output=json --user=testUser: (6.454192168s)
--- PASS: TestJSONOutput/stop/Command (6.45s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-307278 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-307278 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.495455ms)

-- stdout --
	{"specversion":"1.0","id":"c7111089-666a-4bc5-abab-9a5af99e31f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-307278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"057d1e44-a33d-4cd5-8693-1c950872b05a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"86904aa5-d6d4-405f-aca0-32092c95a249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7c986ba-795f-47b7-886f-4a8502d14121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig"}}
	{"specversion":"1.0","id":"51d9f442-9f12-43a9-90ee-00fcc6abdd96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube"}}
	{"specversion":"1.0","id":"0c7d01a0-a019-4e8e-b5d1-f4a506f1e01f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d6c6b8c-aa3d-4e93-a018-6e83865938f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1dabbd09-a805-4a17-8466-22495a1d021c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-307278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-307278
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (95.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-050809 --driver=kvm2  --container-runtime=containerd
E0116 02:09:53.963925  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:53.969238  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:53.979615  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:54.000053  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:54.040401  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:54.120873  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:54.281390  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:54.602047  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:55.243092  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:56.523728  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:09:59.084753  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-050809 --driver=kvm2  --container-runtime=containerd: (47.027167941s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-053634 --driver=kvm2  --container-runtime=containerd
E0116 02:10:04.205355  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:10:14.445552  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:10:34.925853  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-053634 --driver=kvm2  --container-runtime=containerd: (46.013426367s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-050809
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-053634
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-053634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-053634
helpers_test.go:175: Cleaning up "first-050809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-050809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-050809: (1.031005427s)
--- PASS: TestMinikubeProfile (95.80s)

TestMountStart/serial/StartWithMountFirst (28.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-215887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0116 02:11:15.886107  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-215887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.517030288s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.52s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-215887 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-215887 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (29.19s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-233048 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-233048 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.187570016s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.19s)

TestMountStart/serial/VerifyMountSecond (0.43s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-215887 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.1s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-233048
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-233048: (1.101640029s)
--- PASS: TestMountStart/serial/Stop (1.10s)

TestMountStart/serial/RestartStopped (22.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-233048
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-233048: (21.734555728s)
--- PASS: TestMountStart/serial/RestartStopped (22.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-233048 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (124.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020890 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0116 02:12:18.898450  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:18.903818  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:18.914174  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:18.934559  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:18.974912  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:19.055327  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:19.216165  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:19.536868  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:20.177714  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:21.458194  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:24.019030  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:29.139624  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:37.807970  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:12:39.380445  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:12:59.861035  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:13:38.924911  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:13:40.821374  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020890 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m4.277301473s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-020890 -- rollout status deployment/busybox: (3.528900322s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-d5b5j -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-stdpv -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-d5b5j -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-stdpv -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-d5b5j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-stdpv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.40s)
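The DNS checks above follow a fixed matrix: every busybox pod is exec'd with `nslookup` against each of three names. A minimal sketch of that cross product (pod names taken from this run; a fresh deployment would get a different replica-set hash):

```python
# Sketch of the pod x domain matrix exercised by DeployApp2Nodes.
# Pod names are copied from this run's log; the hash suffix is not stable.
pods = ["busybox-5bc68d56bd-d5b5j", "busybox-5bc68d56bd-stdpv"]
domains = [
    "kubernetes.io",
    "kubernetes.default",
    "kubernetes.default.svc.cluster.local",
]

# Outer loop over domains, inner over pods, matching the order in the log.
commands = [
    f"out/minikube-linux-amd64 kubectl -p multinode-020890 -- "
    f"exec {pod} -- nslookup {name}"
    for name in domains
    for pod in pods
]

assert len(commands) == len(pods) * len(domains)  # 6 lookups in total
```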

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-d5b5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-d5b5j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-stdpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020890 -- exec busybox-5bc68d56bd-stdpv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
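The host-IP extraction above depends on nslookup's fixed output layout: `awk 'NR==5'` takes the fifth line and `cut -d' ' -f3` the third space-separated field. A sketch of the same extraction in Python, run against a hypothetical BusyBox-style nslookup transcript (real output depends on the busybox image's applet version):

```python
# Emulates: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# The transcript below is a hypothetical BusyBox nslookup output, used only
# to illustrate why line 5 / field 3 yields the host IP.
transcript = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

line5 = transcript.splitlines()[4]  # awk 'NR==5' (1-based line 5)
host_ip = line5.split(" ")[2]       # cut -d' ' -f3 (1-based field 3)
assert host_ip == "192.168.39.1"    # the gateway IP then gets pinged
```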

                                                
                                    
TestMultiNode/serial/AddNode (44.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-020890 -v 3 --alsologtostderr
E0116 02:14:53.964067  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:15:02.742809  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-020890 -v 3 --alsologtostderr: (43.545441064s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-020890 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.18s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp testdata/cp-test.txt multinode-020890:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1564519077/001/cp-test_multinode-020890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890:/home/docker/cp-test.txt multinode-020890-m02:/home/docker/cp-test_multinode-020890_multinode-020890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test_multinode-020890_multinode-020890-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890:/home/docker/cp-test.txt multinode-020890-m03:/home/docker/cp-test_multinode-020890_multinode-020890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test_multinode-020890_multinode-020890-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp testdata/cp-test.txt multinode-020890-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1564519077/001/cp-test_multinode-020890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m02:/home/docker/cp-test.txt multinode-020890:/home/docker/cp-test_multinode-020890-m02_multinode-020890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test_multinode-020890-m02_multinode-020890.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m02:/home/docker/cp-test.txt multinode-020890-m03:/home/docker/cp-test_multinode-020890-m02_multinode-020890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test_multinode-020890-m02_multinode-020890-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp testdata/cp-test.txt multinode-020890-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1564519077/001/cp-test_multinode-020890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m03:/home/docker/cp-test.txt multinode-020890:/home/docker/cp-test_multinode-020890-m03_multinode-020890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890 "sudo cat /home/docker/cp-test_multinode-020890-m03_multinode-020890.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 cp multinode-020890-m03:/home/docker/cp-test.txt multinode-020890-m02:/home/docker/cp-test_multinode-020890-m03_multinode-020890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 ssh -n multinode-020890-m02 "sudo cat /home/docker/cp-test_multinode-020890-m03_multinode-020890-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.18s)
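The CopyFile sequence above is mechanical: for each node, copy the testdata file onto it, copy it back off to the host, then copy it to every other node, verifying each step with `ssh ... sudo cat`. A sketch of the transfer plan for this three-node profile (host-side destination path simplified from the per-test temp directory in the log):

```python
# Transfer plan behind TestMultiNode/serial/CopyFile for a 3-node cluster.
nodes = ["multinode-020890", "multinode-020890-m02", "multinode-020890-m03"]

plan = []
for src in nodes:
    # host -> node
    plan.append(("testdata/cp-test.txt", f"{src}:/home/docker/cp-test.txt"))
    # node -> host (log uses a per-test temp dir; simplified here)
    plan.append((f"{src}:/home/docker/cp-test.txt", f"/tmp/cp-test_{src}.txt"))
    # node -> every other node
    for dst in nodes:
        if dst != src:
            plan.append((f"{src}:/home/docker/cp-test.txt",
                         f"{dst}:/home/docker/cp-test_{src}_{dst}.txt"))

# 3*(1 + 1 + 2) = 12 copies, matching the twelve `cp` invocations logged above
assert len(plan) == 12
```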

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-020890 node stop m03: (1.277672946s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020890 status: exit status 7 (473.292191ms)

                                                
                                                
-- stdout --
	multinode-020890
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-020890-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-020890-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
E0116 02:15:21.648701  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr: exit status 7 (479.662461ms)

                                                
                                                
-- stdout --
	multinode-020890
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-020890-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-020890-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:15:21.358375  580204 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:15:21.358505  580204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:15:21.358515  580204 out.go:309] Setting ErrFile to fd 2...
	I0116 02:15:21.358520  580204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:15:21.358733  580204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 02:15:21.358929  580204 out.go:303] Setting JSON to false
	I0116 02:15:21.358978  580204 mustload.go:65] Loading cluster: multinode-020890
	I0116 02:15:21.359084  580204 notify.go:220] Checking for updates...
	I0116 02:15:21.359641  580204 config.go:182] Loaded profile config "multinode-020890": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:15:21.359665  580204 status.go:255] checking status of multinode-020890 ...
	I0116 02:15:21.360302  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.360363  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.380289  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0116 02:15:21.380749  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.381359  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.381386  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.381826  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.382111  580204 main.go:141] libmachine: (multinode-020890) Calling .GetState
	I0116 02:15:21.383785  580204 status.go:330] multinode-020890 host status = "Running" (err=<nil>)
	I0116 02:15:21.383825  580204 host.go:66] Checking if "multinode-020890" exists ...
	I0116 02:15:21.384300  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.384359  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.400144  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0116 02:15:21.400671  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.401160  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.401208  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.401717  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.401965  580204 main.go:141] libmachine: (multinode-020890) Calling .GetIP
	I0116 02:15:21.405397  580204 main.go:141] libmachine: (multinode-020890) DBG | domain multinode-020890 has defined MAC address 52:54:00:a1:e0:6f in network mk-multinode-020890
	I0116 02:15:21.406109  580204 main.go:141] libmachine: (multinode-020890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:e0:6f", ip: ""} in network mk-multinode-020890: {Iface:virbr1 ExpiryTime:2024-01-16 03:12:31 +0000 UTC Type:0 Mac:52:54:00:a1:e0:6f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-020890 Clientid:01:52:54:00:a1:e0:6f}
	I0116 02:15:21.406144  580204 host.go:66] Checking if "multinode-020890" exists ...
	I0116 02:15:21.406226  580204 main.go:141] libmachine: (multinode-020890) DBG | domain multinode-020890 has defined IP address 192.168.39.165 and MAC address 52:54:00:a1:e0:6f in network mk-multinode-020890
	I0116 02:15:21.406747  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.406824  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.422837  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0116 02:15:21.423318  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.423810  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.423838  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.424190  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.424406  580204 main.go:141] libmachine: (multinode-020890) Calling .DriverName
	I0116 02:15:21.424626  580204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:15:21.424664  580204 main.go:141] libmachine: (multinode-020890) Calling .GetSSHHostname
	I0116 02:15:21.427975  580204 main.go:141] libmachine: (multinode-020890) DBG | domain multinode-020890 has defined MAC address 52:54:00:a1:e0:6f in network mk-multinode-020890
	I0116 02:15:21.428662  580204 main.go:141] libmachine: (multinode-020890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:e0:6f", ip: ""} in network mk-multinode-020890: {Iface:virbr1 ExpiryTime:2024-01-16 03:12:31 +0000 UTC Type:0 Mac:52:54:00:a1:e0:6f Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-020890 Clientid:01:52:54:00:a1:e0:6f}
	I0116 02:15:21.428700  580204 main.go:141] libmachine: (multinode-020890) DBG | domain multinode-020890 has defined IP address 192.168.39.165 and MAC address 52:54:00:a1:e0:6f in network mk-multinode-020890
	I0116 02:15:21.428876  580204 main.go:141] libmachine: (multinode-020890) Calling .GetSSHPort
	I0116 02:15:21.429066  580204 main.go:141] libmachine: (multinode-020890) Calling .GetSSHKeyPath
	I0116 02:15:21.429274  580204 main.go:141] libmachine: (multinode-020890) Calling .GetSSHUsername
	I0116 02:15:21.429432  580204 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/multinode-020890/id_rsa Username:docker}
	I0116 02:15:21.524096  580204 ssh_runner.go:195] Run: systemctl --version
	I0116 02:15:21.530420  580204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:15:21.545080  580204 kubeconfig.go:92] found "multinode-020890" server: "https://192.168.39.165:8443"
	I0116 02:15:21.545120  580204 api_server.go:166] Checking apiserver status ...
	I0116 02:15:21.545164  580204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:15:21.559292  580204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	I0116 02:15:21.570736  580204 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podb45ed8624fe3c0b833f93dc462ffb1c3/82bf495aaaaa5f50476740bf390e90d279df3a95dceecd27364bdec9a3dd8b5d"
	I0116 02:15:21.570804  580204 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb45ed8624fe3c0b833f93dc462ffb1c3/82bf495aaaaa5f50476740bf390e90d279df3a95dceecd27364bdec9a3dd8b5d/freezer.state
	I0116 02:15:21.581100  580204 api_server.go:204] freezer state: "THAWED"
	I0116 02:15:21.581138  580204 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0116 02:15:21.586433  580204 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0116 02:15:21.586462  580204 status.go:421] multinode-020890 apiserver status = Running (err=<nil>)
	I0116 02:15:21.586471  580204 status.go:257] multinode-020890 status: &{Name:multinode-020890 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:15:21.586490  580204 status.go:255] checking status of multinode-020890-m02 ...
	I0116 02:15:21.586816  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.586854  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.603352  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0116 02:15:21.603823  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.604294  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.604317  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.604663  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.604859  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetState
	I0116 02:15:21.606472  580204 status.go:330] multinode-020890-m02 host status = "Running" (err=<nil>)
	I0116 02:15:21.606509  580204 host.go:66] Checking if "multinode-020890-m02" exists ...
	I0116 02:15:21.606854  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.606894  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.622173  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37571
	I0116 02:15:21.622580  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.623061  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.623086  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.623479  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.623680  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetIP
	I0116 02:15:21.626668  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | domain multinode-020890-m02 has defined MAC address 52:54:00:29:70:5a in network mk-multinode-020890
	I0116 02:15:21.627221  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:70:5a", ip: ""} in network mk-multinode-020890: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:36 +0000 UTC Type:0 Mac:52:54:00:29:70:5a Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-020890-m02 Clientid:01:52:54:00:29:70:5a}
	I0116 02:15:21.627253  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | domain multinode-020890-m02 has defined IP address 192.168.39.76 and MAC address 52:54:00:29:70:5a in network mk-multinode-020890
	I0116 02:15:21.627383  580204 host.go:66] Checking if "multinode-020890-m02" exists ...
	I0116 02:15:21.627699  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.627749  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.644335  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0116 02:15:21.644986  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.645482  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.645504  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.645879  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.646166  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .DriverName
	I0116 02:15:21.646413  580204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:15:21.646446  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetSSHHostname
	I0116 02:15:21.649557  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | domain multinode-020890-m02 has defined MAC address 52:54:00:29:70:5a in network mk-multinode-020890
	I0116 02:15:21.650150  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:70:5a", ip: ""} in network mk-multinode-020890: {Iface:virbr1 ExpiryTime:2024-01-16 03:13:36 +0000 UTC Type:0 Mac:52:54:00:29:70:5a Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:multinode-020890-m02 Clientid:01:52:54:00:29:70:5a}
	I0116 02:15:21.650191  580204 main.go:141] libmachine: (multinode-020890-m02) DBG | domain multinode-020890-m02 has defined IP address 192.168.39.76 and MAC address 52:54:00:29:70:5a in network mk-multinode-020890
	I0116 02:15:21.650347  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetSSHPort
	I0116 02:15:21.650547  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetSSHKeyPath
	I0116 02:15:21.650737  580204 main.go:141] libmachine: (multinode-020890-m02) Calling .GetSSHUsername
	I0116 02:15:21.650883  580204 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17967-558382/.minikube/machines/multinode-020890-m02/id_rsa Username:docker}
	I0116 02:15:21.738940  580204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:15:21.753461  580204 status.go:257] multinode-020890-m02 status: &{Name:multinode-020890-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:15:21.753500  580204 status.go:255] checking status of multinode-020890-m03 ...
	I0116 02:15:21.753928  580204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:15:21.753979  580204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:15:21.770410  580204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I0116 02:15:21.770988  580204 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:15:21.771480  580204 main.go:141] libmachine: Using API Version  1
	I0116 02:15:21.771501  580204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:15:21.771893  580204 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:15:21.772158  580204 main.go:141] libmachine: (multinode-020890-m03) Calling .GetState
	I0116 02:15:21.773801  580204 status.go:330] multinode-020890-m03 host status = "Stopped" (err=<nil>)
	I0116 02:15:21.773819  580204 status.go:343] host is not running, skipping remaining checks
	I0116 02:15:21.773824  580204 status.go:257] multinode-020890-m03 status: &{Name:multinode-020890-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
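The `status` stdout shown above has a regular shape: blank-line-separated blocks, each opening with the node name followed by `key: value` pairs. A small parser for that text (this is the human-readable format printed by this minikube build, not a stable interface; tooling should prefer `minikube status --output json`, which the test suite itself uses elsewhere):

```python
# Parses the plain-text `minikube status` output shown above into dicts.
status_text = """\
multinode-020890
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-020890-m02
type: Worker
host: Running
kubelet: Running

multinode-020890-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text):
    nodes = {}
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        name, fields = lines[0], {}
        for line in lines[1:]:
            key, _, value = line.partition(": ")
            fields[key] = value
        nodes[name] = fields
    return nodes

nodes = parse_status(status_text)
assert nodes["multinode-020890-m03"]["host"] == "Stopped"   # the stopped m03
assert nodes["multinode-020890"]["apiserver"] == "Running"  # control plane up
```

The stopped worker is why both `status` invocations exit with status 7 rather than 0, while the command still prints the full table.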

TestMultiNode/serial/StartAfterStop (26.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-020890 node start m03 --alsologtostderr: (26.233714385s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.92s)

TestMultiNode/serial/RestartKeepsNodes (310.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020890
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-020890
E0116 02:17:18.901072  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:17:46.583673  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:18:38.925373  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-020890: (3m4.598392542s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020890 --wait=true -v=8 --alsologtostderr
E0116 02:19:53.964632  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:20:01.973858  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020890 --wait=true -v=8 --alsologtostderr: (2m5.514010365s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020890
--- PASS: TestMultiNode/serial/RestartKeepsNodes (310.24s)

TestMultiNode/serial/DeleteNode (1.83s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-020890 node delete m03: (1.240345721s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.83s)

TestMultiNode/serial/StopMultiNode (182.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 stop
E0116 02:22:18.900991  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:23:38.924734  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-020890 stop: (3m2.7768228s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020890 status: exit status 7 (108.338959ms)

-- stdout --
	multinode-020890
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-020890-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr: exit status 7 (105.918528ms)

-- stdout --
	multinode-020890
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-020890-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0116 02:24:03.721934  582739 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:24:03.722099  582739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:24:03.722114  582739 out.go:309] Setting ErrFile to fd 2...
	I0116 02:24:03.722122  582739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:24:03.722336  582739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 02:24:03.722526  582739 out.go:303] Setting JSON to false
	I0116 02:24:03.722570  582739 mustload.go:65] Loading cluster: multinode-020890
	I0116 02:24:03.722720  582739 notify.go:220] Checking for updates...
	I0116 02:24:03.723149  582739 config.go:182] Loaded profile config "multinode-020890": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0116 02:24:03.723174  582739 status.go:255] checking status of multinode-020890 ...
	I0116 02:24:03.723756  582739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:24:03.723843  582739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:24:03.742331  582739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0116 02:24:03.742887  582739 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:24:03.743744  582739 main.go:141] libmachine: Using API Version  1
	I0116 02:24:03.743776  582739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:24:03.744278  582739 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:24:03.744526  582739 main.go:141] libmachine: (multinode-020890) Calling .GetState
	I0116 02:24:03.746476  582739 status.go:330] multinode-020890 host status = "Stopped" (err=<nil>)
	I0116 02:24:03.746495  582739 status.go:343] host is not running, skipping remaining checks
	I0116 02:24:03.746502  582739 status.go:257] multinode-020890 status: &{Name:multinode-020890 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:24:03.746540  582739 status.go:255] checking status of multinode-020890-m02 ...
	I0116 02:24:03.746861  582739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0116 02:24:03.746900  582739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:24:03.761578  582739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0116 02:24:03.762340  582739 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:24:03.763558  582739 main.go:141] libmachine: Using API Version  1
	I0116 02:24:03.763589  582739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:24:03.764611  582739 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:24:03.764863  582739 main.go:141] libmachine: (multinode-020890-m02) Calling .GetState
	I0116 02:24:03.766588  582739 status.go:330] multinode-020890-m02 host status = "Stopped" (err=<nil>)
	I0116 02:24:03.766606  582739 status.go:343] host is not running, skipping remaining checks
	I0116 02:24:03.766611  582739 status.go:257] multinode-020890-m02 status: &{Name:multinode-020890-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.99s)

TestMultiNode/serial/RestartMultiNode (89.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020890 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0116 02:24:53.964978  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020890 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m28.54679746s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020890 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.13s)

TestMultiNode/serial/ValidateNameConflict (55.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020890
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020890-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-020890-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (86.339582ms)

-- stdout --
	* [multinode-020890-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-020890-m02' is duplicated with machine name 'multinode-020890-m02' in profile 'multinode-020890'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020890-m03 --driver=kvm2  --container-runtime=containerd
E0116 02:26:17.009382  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020890-m03 --driver=kvm2  --container-runtime=containerd: (53.952294566s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-020890
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-020890: exit status 80 (244.024959ms)

-- stdout --
	* Adding node m03 to cluster multinode-020890
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-020890-m03 already exists in multinode-020890-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-020890-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.18s)

TestPreload (351.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-898094 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0116 02:27:18.900379  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
E0116 02:28:38.925357  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:28:41.944579  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-898094 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m16.468554996s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-898094 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-898094 image pull gcr.io/k8s-minikube/busybox: (2.387585223s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-898094
E0116 02:29:53.964981  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-898094: (1m31.456071582s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-898094 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0116 02:32:18.899036  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-898094 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m0.198440929s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-898094 image list
helpers_test.go:175: Cleaning up "test-preload-898094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-898094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-898094: (1.096557414s)
--- PASS: TestPreload (351.85s)

TestScheduledStopUnix (120.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-750980 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-750980 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.260770527s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-750980 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-750980 -n scheduled-stop-750980
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-750980 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-750980 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-750980 -n scheduled-stop-750980
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-750980
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-750980 --schedule 15s
E0116 02:33:38.924529  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-750980
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-750980: exit status 7 (87.452546ms)

-- stdout --
	scheduled-stop-750980
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-750980 -n scheduled-stop-750980
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-750980 -n scheduled-stop-750980: exit status 7 (88.467832ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-750980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-750980
--- PASS: TestScheduledStopUnix (120.18s)

TestRunningBinaryUpgrade (210.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3537181297 start -p running-upgrade-152215 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0116 02:34:53.964002  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3537181297 start -p running-upgrade-152215 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m13.587891176s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-152215 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0116 02:36:41.974261  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-152215 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m12.92236576s)
helpers_test.go:175: Cleaning up "running-upgrade-152215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-152215
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-152215: (1.21661563s)
--- PASS: TestRunningBinaryUpgrade (210.34s)

TestKubernetesUpgrade (205.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m22.162842717s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-675422
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-675422: (2.131407085s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-675422 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-675422 status --format={{.Host}}: exit status 7 (98.031893ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0116 02:37:18.899034  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.361756959s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-675422 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (106.031363ms)

-- stdout --
	* [kubernetes-upgrade-675422] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-675422
	    minikube start -p kubernetes-upgrade-675422 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6754222 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-675422 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-675422 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.816995363s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-675422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-675422
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-675422: (1.252451089s)
--- PASS: TestKubernetesUpgrade (205.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (108.107147ms)

-- stdout --
	* [NoKubernetes-657748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestPause/serial/Start (72.45s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-711385 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-711385 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m12.446534728s)
--- PASS: TestPause/serial/Start (72.45s)

TestNoKubernetes/serial/StartWithK8s (99.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657748 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657748 --driver=kvm2  --container-runtime=containerd: (1m39.485505413s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657748 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.79s)

TestPause/serial/SecondStartNoReconfiguration (34.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-711385 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-711385 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (34.170044683s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.18s)

TestNoKubernetes/serial/StartWithStopK8s (50.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (49.055474574s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657748 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-657748 status -o json: exit status 2 (257.365233ms)
-- stdout --
	{"Name":"NoKubernetes-657748","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-657748
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-657748: (1.039525644s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.35s)

TestPause/serial/Pause (0.86s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-711385 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.30s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-711385 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-711385 --output=json --layout=cluster: exit status 2 (299.540347ms)
-- stdout --
	{"Name":"pause-711385","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-711385","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.77s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-711385 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (0.85s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-711385 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (1.44s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-711385 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-711385 --alsologtostderr -v=5: (1.438943241s)
--- PASS: TestPause/serial/DeletePaused (1.44s)

TestPause/serial/VerifyDeletedResources (0.51s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

TestNoKubernetes/serial/Start (32.16s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657748 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (32.16153118s)
--- PASS: TestNoKubernetes/serial/Start (32.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.489318ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (22.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (19.10956255s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.227452331s)
--- PASS: TestNoKubernetes/serial/ProfileList (22.34s)

TestNetworkPlugins/group/false (4.29s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-707497 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-707497 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (143.58277ms)
-- stdout --
	* [false-707497] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0116 02:37:27.390576  589505 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:37:27.390742  589505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:37:27.390754  589505 out.go:309] Setting ErrFile to fd 2...
	I0116 02:37:27.390768  589505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:37:27.391058  589505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-558382/.minikube/bin
	I0116 02:37:27.391917  589505 out.go:303] Setting JSON to false
	I0116 02:37:27.393349  589505 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11991,"bootTime":1705360657,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:37:27.393438  589505 start.go:138] virtualization: kvm guest
	I0116 02:37:27.395911  589505 out.go:177] * [false-707497] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:37:27.398137  589505 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:37:27.398180  589505 notify.go:220] Checking for updates...
	I0116 02:37:27.399666  589505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:37:27.401176  589505 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-558382/kubeconfig
	I0116 02:37:27.402578  589505 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-558382/.minikube
	I0116 02:37:27.403985  589505 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:37:27.405395  589505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:37:27.407751  589505 config.go:182] Loaded profile config "NoKubernetes-657748": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0116 02:37:27.407939  589505 config.go:182] Loaded profile config "kubernetes-upgrade-675422": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0116 02:37:27.408088  589505 config.go:182] Loaded profile config "running-upgrade-152215": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0116 02:37:27.408266  589505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:37:27.448228  589505 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:37:27.449840  589505 start.go:298] selected driver: kvm2
	I0116 02:37:27.449864  589505 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:37:27.449889  589505 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:37:27.452528  589505 out.go:177] 
	W0116 02:37:27.453964  589505 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0116 02:37:27.455284  589505 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-707497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-707497

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-707497

>>> host: /etc/nsswitch.conf:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/hosts:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/resolv.conf:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-707497

>>> host: crictl pods:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: crictl containers:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> k8s: describe netcat deployment:
error: context "false-707497" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-707497" does not exist

>>> k8s: netcat logs:
error: context "false-707497" does not exist

>>> k8s: describe coredns deployment:
error: context "false-707497" does not exist

>>> k8s: describe coredns pods:
error: context "false-707497" does not exist

>>> k8s: coredns logs:
error: context "false-707497" does not exist

>>> k8s: describe api server pod(s):
error: context "false-707497" does not exist

>>> k8s: api server logs:
error: context "false-707497" does not exist

>>> host: /etc/cni:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: ip a s:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: ip r s:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: iptables-save:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: iptables table nat:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> k8s: describe kube-proxy daemon set:
error: context "false-707497" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-707497" does not exist

>>> k8s: kube-proxy logs:
error: context "false-707497" does not exist

>>> host: kubelet daemon status:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: kubelet daemon config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> k8s: kubelet logs:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:37:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.58:8443
  name: running-upgrade-152215
contexts:
- context:
    cluster: running-upgrade-152215
    user: running-upgrade-152215
  name: running-upgrade-152215
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-152215
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.crt
    client-key: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-707497

>>> host: docker daemon status:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: docker daemon config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/docker/daemon.json:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: docker system info:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: cri-docker daemon status:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: cri-docker daemon config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: cri-dockerd version:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: containerd daemon status:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: containerd daemon config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/containerd/config.toml:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: containerd config dump:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: crio daemon status:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: crio daemon config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: /etc/crio:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

>>> host: crio config:
* Profile "false-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-707497"

----------------------- debugLogs end: false-707497 [took: 3.974162657s] --------------------------------
helpers_test.go:175: Cleaning up "false-707497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-707497
--- PASS: TestNetworkPlugins/group/false (4.29s)

TestNoKubernetes/serial/Stop (1.49s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-657748
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-657748: (1.485996613s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (43.81s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657748 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657748 --driver=kvm2  --container-runtime=containerd: (43.813610798s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.133535ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
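The non-zero exit above is the expected result: `systemctl is-active` exits 0 only when the unit is active, and non-zero otherwise (the logged `Process exited with status 3` matches systemd's "inactive" code). A minimal sketch of that exit-status handling, assuming a POSIX shell and using `false` as a stand-in for `systemctl is-active --quiet service kubelet` on a host where kubelet is absent:

```shell
#!/bin/sh
# Sketch only: "$@" stands in for the probed command
# (e.g. systemctl is-active --quiet service kubelet).
check_not_running() {
  if "$@"; then
    # exit 0 => unit is active => the "not running" check fails
    echo "kubelet active (unexpected)"
    return 1
  else
    # non-zero exit (e.g. 3 = inactive) => kubelet is not running
    echo "kubelet not running"
    return 0
  fi
}

check_not_running false
```

Here `false` simulates the inactive unit, so the function reports "kubelet not running", mirroring why the test passes despite `exit status 1`.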

TestStoppedBinaryUpgrade/Setup (2.21s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-778382 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-778382 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m33.027099383s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.03s)

TestStoppedBinaryUpgrade/Upgrade (142.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4063001368 start -p stopped-upgrade-587286 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4063001368 start -p stopped-upgrade-587286 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m21.799572512s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4063001368 -p stopped-upgrade-587286 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4063001368 -p stopped-upgrade-587286 stop: (1.498593046s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-587286 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-587286 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.695976446s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.99s)

TestStartStop/group/no-preload/serial/FirstStart (107.18s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-353861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 02:39:53.964354  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-353861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m47.178655875s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.18s)

TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-353861 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [52fdbf9c-c130-4f37-9301-0612a915ee26] Pending
helpers_test.go:344: "busybox" [52fdbf9c-c130-4f37-9301-0612a915ee26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [52fdbf9c-c130-4f37-9301-0612a915ee26] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004622909s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-353861 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/FirstStart (61.35s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-829892 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-829892 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m1.353658898s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-353861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-353861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063224009s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-353861 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-587286
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

TestStartStop/group/no-preload/serial/Stop (91.78s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-353861 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-353861 --alsologtostderr -v=3: (1m31.782337632s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.78s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-944993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-944993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m16.476324753s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.48s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-778382 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1085c4b-e4aa-4fcd-97ca-bb7a222b87ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1085c4b-e4aa-4fcd-97ca-bb7a222b87ba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005338238s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-778382 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-778382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-778382 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/old-k8s-version/serial/Stop (92.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-778382 --alsologtostderr -v=3
E0116 02:42:18.899251  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-778382 --alsologtostderr -v=3: (1m32.282101335s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.28s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-829892 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7045a49a-b9d4-49fa-a1a8-6fc9b841a2ee] Pending
helpers_test.go:344: "busybox" [7045a49a-b9d4-49fa-a1a8-6fc9b841a2ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7045a49a-b9d4-49fa-a1a8-6fc9b841a2ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004782968s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-829892 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-829892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-829892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103786938s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-829892 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (91.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-829892 --alsologtostderr -v=3
E0116 02:42:57.010110  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-829892 --alsologtostderr -v=3: (1m31.583122097s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-944993 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [43a9b5d0-d4b9-47db-b4fd-b8c7624fb257] Pending
helpers_test.go:344: "busybox" [43a9b5d0-d4b9-47db-b4fd-b8c7624fb257] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [43a9b5d0-d4b9-47db-b4fd-b8c7624fb257] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004106376s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-944993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-944993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-944993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068127944s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-944993 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-944993 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-944993 --alsologtostderr -v=3: (1m32.277464766s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-353861 -n no-preload-353861
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-353861 -n no-preload-353861: exit status 7 (88.349407ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-353861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (329.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-353861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-353861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m28.947760407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-353861 -n no-preload-353861
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-778382 -n old-k8s-version-778382
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-778382 -n old-k8s-version-778382: exit status 7 (86.937439ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-778382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (186.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-778382 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0116 02:43:38.925187  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-778382 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (3m6.41456964s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-778382 -n old-k8s-version-778382
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (186.74s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829892 -n embed-certs-829892
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829892 -n embed-certs-829892: exit status 7 (86.906293ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-829892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (330.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-829892 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-829892 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m30.279850145s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-829892 -n embed-certs-829892
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (330.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993: exit status 7 (108.556743ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-944993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-944993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0116 02:44:53.964394  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
E0116 02:45:21.945856  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-944993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m34.706609906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-p24tn" [1ee6790b-e2a5-477a-8ee1-91fcfccb86f1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005987496s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-p24tn" [1ee6790b-e2a5-477a-8ee1-91fcfccb86f1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004837445s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-778382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-778382 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-778382 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-778382 -n old-k8s-version-778382
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-778382 -n old-k8s-version-778382: exit status 2 (290.799318ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-778382 -n old-k8s-version-778382
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-778382 -n old-k8s-version-778382: exit status 2 (290.957672ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-778382 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-778382 -n old-k8s-version-778382
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-778382 -n old-k8s-version-778382
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

TestStartStop/group/newest-cni/serial/FirstStart (58.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-660084 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 02:47:18.899302  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-660084 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (58.560655348s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-660084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-660084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.345362902s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/newest-cni/serial/Stop (7.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-660084 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-660084 --alsologtostderr -v=3: (7.132622668s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-660084 -n newest-cni-660084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-660084 -n newest-cni-660084: exit status 7 (98.103878ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-660084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (51.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-660084 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0116 02:48:38.924680  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-660084 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (50.814899247s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-660084 -n newest-cni-660084
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7wpm" [a90b71af-4a77-470c-b4e4-cb39db7197c1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7wpm" [a90b71af-4a77-470c-b4e4-cb39db7197c1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006426193s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-660084 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-660084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-660084 -n newest-cni-660084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-660084 -n newest-cni-660084: exit status 2 (295.22193ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-660084 -n newest-cni-660084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-660084 -n newest-cni-660084: exit status 2 (296.527291ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-660084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-660084 -n newest-cni-660084
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-660084 -n newest-cni-660084
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7wpm" [a90b71af-4a77-470c-b4e4-cb39db7197c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004919976s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-353861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/auto/Start (63.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m3.866728145s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.87s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-353861 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-353861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-353861 -n no-preload-353861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-353861 -n no-preload-353861: exit status 2 (299.89724ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-353861 -n no-preload-353861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-353861 -n no-preload-353861: exit status 2 (299.485184ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-353861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-353861 -n no-preload-353861
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-353861 -n no-preload-353861
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)

TestNetworkPlugins/group/kindnet/Start (85.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m25.990269653s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.99s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-crjqn" [3ecf800a-2fc6-4696-ac71-e9ee1b7d90c6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0116 02:49:53.964014  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/functional-139041/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-crjqn" [3ecf800a-2fc6-4696-ac71-e9ee1b7d90c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.005314902s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nqzss" [0423b40a-1de1-4557-857e-7fd6d7be4d66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nqzss" [0423b40a-1de1-4557-857e-7fd6d7be4d66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005864684s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.40s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-crjqn" [3ecf800a-2fc6-4696-ac71-e9ee1b7d90c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005396915s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-829892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-829892 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-829892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829892 -n embed-certs-829892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829892 -n embed-certs-829892: exit status 2 (319.215433ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-829892 -n embed-certs-829892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-829892 -n embed-certs-829892: exit status 2 (436.27585ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-829892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-829892 -n embed-certs-829892
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-829892 -n embed-certs-829892
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.53s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-92ppf" [e131478e-bb5f-4590-a300-fa066351e91f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-92ppf" [e131478e-bb5f-4590-a300-fa066351e91f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005543394s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestNetworkPlugins/group/calico/Start (97.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m37.461663268s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.46s)

TestNetworkPlugins/group/custom-flannel/Start (108.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m48.08643653s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (108.09s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sc6xc" [d2720f6b-133d-4067-aea2-6b7b665bea04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004420081s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-92ppf" [e131478e-bb5f-4590-a300-fa066351e91f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005277915s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-944993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p895s" [68c19cfd-cb23-4771-9d33-5b2a028dbf8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p895s" [68c19cfd-cb23-4771-9d33-5b2a028dbf8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003985951s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-944993 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-944993 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993: exit status 2 (293.939876ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993: exit status 2 (288.504571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-944993 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-944993 -n default-k8s-diff-port-944993
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)
E0116 02:52:55.165208  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (99.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m39.438735113s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (128.53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0116 02:51:33.239760  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.245090  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.255431  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.276172  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.316568  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.397555  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.558116  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:33.879074  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:34.520207  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:35.800464  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:38.360951  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:43.482140  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:52.927607  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:52.933007  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:52.943411  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:52.963782  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:53.004868  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:53.086099  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:53.246790  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:53.567486  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:53.723226  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:51:54.208262  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:55.488831  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
E0116 02:51:58.049927  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m8.529648435s)
--- PASS: TestNetworkPlugins/group/flannel/Start (128.53s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wxk7b" [527c8831-930e-4adb-bae4-ab5a9c08fba2] Running
E0116 02:52:03.170593  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006808055s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2hq27" [d55eb33b-a53d-46b9-9f70-186873c1e5ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 02:52:13.411768  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2hq27" [d55eb33b-a53d-46b9-9f70-186873c1e5ae] Running
E0116 02:52:14.204063  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/no-preload-353861/client.crt: no such file or directory
E0116 02:52:18.898337  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/ingress-addon-legacy-067010/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.102264742s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6ptxq" [f70f74e3-3b81-4d32-8d4e-7933c80bf0ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6ptxq" [f70f74e3-3b81-4d32-8d4e-7933c80bf0ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004356296s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5nvw2" [726e57e0-4723-4349-ad8c-003ba0020265] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5nvw2" [726e57e0-4723-4349-ad8c-003ba0020265] Running
E0116 02:52:33.892531  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/old-k8s-version-778382/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005698671s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.66s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-707497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m7.657914829s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.66s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7zptc" [2e42a2e8-d1ca-41e8-897d-c23ca8008bf8] Running
E0116 02:53:21.975425  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/addons-874655/client.crt: no such file or directory
E0116 02:53:23.627422  565621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/default-k8s-diff-port-944993/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005344038s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-brrrj" [dc82c225-eae6-496a-b716-49c688ad3d95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-brrrj" [dc82c225-eae6-496a-b716-49c688ad3d95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00483731s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-707497 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-707497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22zt2" [c1218724-9280-4d8e-91a2-d7cf6dc96e7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-22zt2" [c1218724-9280-4d8e-91a2-d7cf6dc96e7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006021456s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-707497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-707497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)


Test skip (39/318)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
252 TestStartStop/group/disable-driver-mounts 0.17
266 TestNetworkPlugins/group/kubenet 4.03
276 TestNetworkPlugins/group/cilium 4.18
TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-061614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-061614
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.03s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-707497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-707497

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-707497

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: /etc/hosts:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: /etc/resolv.conf:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-707497

>>> host: crictl pods:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: crictl containers:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> k8s: describe netcat deployment:
error: context "kubenet-707497" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-707497" does not exist

>>> k8s: netcat logs:
error: context "kubenet-707497" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-707497" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-707497" does not exist

>>> k8s: coredns logs:
error: context "kubenet-707497" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-707497" does not exist

>>> k8s: api server logs:
error: context "kubenet-707497" does not exist

>>> host: /etc/cni:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: ip a s:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: ip r s:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: iptables-save:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: iptables table nat:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-707497" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-707497" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-707497" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: kubelet daemon config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> k8s: kubelet logs:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:37:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.58:8443
  name: running-upgrade-152215
contexts:
- context:
    cluster: running-upgrade-152215
    user: running-upgrade-152215
  name: running-upgrade-152215
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-152215
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.crt
    client-key: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-707497

>>> host: docker daemon status:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: docker daemon config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: docker system info:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: cri-docker daemon status:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

>>> host: cri-docker daemon config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-707497"

                                                
                                                
----------------------- debugLogs end: kubenet-707497 [took: 3.85455979s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-707497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-707497
--- SKIP: TestNetworkPlugins/group/kubenet (4.03s)
TestNetworkPlugins/group/cilium (4.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-707497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-707497

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-707497

>>> host: /etc/nsswitch.conf:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/hosts:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/resolv.conf:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-707497

>>> host: crictl pods:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: crictl containers:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> k8s: describe netcat deployment:
error: context "cilium-707497" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-707497" does not exist

>>> k8s: netcat logs:
error: context "cilium-707497" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-707497" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-707497" does not exist

>>> k8s: coredns logs:
error: context "cilium-707497" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-707497" does not exist

>>> k8s: api server logs:
error: context "cilium-707497" does not exist

>>> host: /etc/cni:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: ip a s:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: ip r s:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: iptables-save:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: iptables table nat:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-707497

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-707497

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-707497" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-707497" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-707497

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-707497

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-707497" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-707497" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-707497" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-707497" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-707497" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: kubelet daemon config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> k8s: kubelet logs:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-558382/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 02:37:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.58:8443
  name: running-upgrade-152215
contexts:
- context:
    cluster: running-upgrade-152215
    user: running-upgrade-152215
  name: running-upgrade-152215
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-152215
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.crt
    client-key: /home/jenkins/minikube-integration/17967-558382/.minikube/profiles/running-upgrade-152215/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-707497

>>> host: docker daemon status:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: docker daemon config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: docker system info:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: cri-docker daemon status:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: cri-docker daemon config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: cri-dockerd version:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: containerd daemon status:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: containerd daemon config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: containerd config dump:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: crio daemon status:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: crio daemon config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: /etc/crio:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

>>> host: crio config:
* Profile "cilium-707497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-707497"

----------------------- debugLogs end: cilium-707497 [took: 3.999235725s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-707497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-707497
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)