Test Report: KVM_Linux_containerd 18007

fc27285b44a3684906f383c28cb886ae15cd7524:2024-01-30:32829

Test fail (8/318)

TestAddons/parallel/Headlamp (3.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-444600 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-444600 --alsologtostderr -v=1: exit status 11 (405.260176ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0130 19:28:54.788822   13966 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:28:54.788976   13966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:28:54.788984   13966 out.go:309] Setting ErrFile to fd 2...
	I0130 19:28:54.788989   13966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:28:54.789201   13966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:28:54.789437   13966 mustload.go:65] Loading cluster: addons-444600
	I0130 19:28:54.789766   13966 config.go:182] Loaded profile config "addons-444600": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:28:54.789784   13966 addons.go:597] checking whether the cluster is paused
	I0130 19:28:54.789869   13966 config.go:182] Loaded profile config "addons-444600": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:28:54.789883   13966 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:28:54.790289   13966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:28:54.790330   13966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:28:54.804140   13966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0130 19:28:54.804610   13966 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:28:54.805170   13966 main.go:141] libmachine: Using API Version  1
	I0130 19:28:54.805193   13966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:28:54.805714   13966 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:28:54.805920   13966 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:28:54.807577   13966 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:28:54.807775   13966 ssh_runner.go:195] Run: systemctl --version
	I0130 19:28:54.807804   13966 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:28:54.810257   13966 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:28:54.810736   13966 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:28:54.810775   13966 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:28:54.810911   13966 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:28:54.811088   13966 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:28:54.811271   13966 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:28:54.811436   13966 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:28:54.916070   13966 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0130 19:28:54.916152   13966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 19:28:55.026185   13966 cri.go:89] found id: "9a90e5bd72522c612246d02959b6f8a0229289a0349c3b94668e5f69078e8bdc"
	I0130 19:28:55.026206   13966 cri.go:89] found id: "498f33adbd3c8db4ceb55570caffbfcea372518757b5938cf5955aac935092c4"
	I0130 19:28:55.026211   13966 cri.go:89] found id: "59e97dc35c722cff36bc8128a26ed3bdd1058b497f46670686cc74ac7e1a290b"
	I0130 19:28:55.026215   13966 cri.go:89] found id: "b6a06046fd68d99413bddc47c0ad4aaa148a8c999d34fba73f3cead680b65294"
	I0130 19:28:55.026218   13966 cri.go:89] found id: "18a60975511dee36f68281d2edc9a4e325ee78bf1ffbd17b6b2eebf627c5b717"
	I0130 19:28:55.026224   13966 cri.go:89] found id: "ec825b4b3565a66773e438b05abefba860ab64d0b99c20b6952682031d1c97b5"
	I0130 19:28:55.026228   13966 cri.go:89] found id: "71a7cdfcc920bf53a6f102c8c7872f593af0a0c69dcbf699773e51f36cb57765"
	I0130 19:28:55.026231   13966 cri.go:89] found id: "2dfe227e20f524e347a4a33c0a652205c18a94f3dd02ee0658848d61f401ae03"
	I0130 19:28:55.026234   13966 cri.go:89] found id: "3c3611954a573fa3b72c4ed84fc9414893f9486ddd59129db56edf20140021bd"
	I0130 19:28:55.026240   13966 cri.go:89] found id: "be765bcef4e559d5d026b4c6966786e336f16c098112c44480b25d8283069f95"
	I0130 19:28:55.026249   13966 cri.go:89] found id: "8cd612a793207e1a31c070523dd2db2fc0b0b58dc14db69ade1124fab9a2bce0"
	I0130 19:28:55.026253   13966 cri.go:89] found id: "39dbd9946a21d9ca8b3051ce0e11472ef7df7227193f5941a5c8428fd422f707"
	I0130 19:28:55.026256   13966 cri.go:89] found id: "473f1910dba78dd28c5983f56ff7179ebeb10f558c344b8efc3905db0b525e85"
	I0130 19:28:55.026272   13966 cri.go:89] found id: "e56a231ac6899d59be8c8557484f8e0c758bdded0ed1b609dac7cf8c606d96de"
	I0130 19:28:55.026277   13966 cri.go:89] found id: "4cc468232d18438c645de932419f088bf8b715b39d5d199b8e339922ef9ccbd5"
	I0130 19:28:55.026280   13966 cri.go:89] found id: "54277e83cf20f665a11210cf8c50f80275936e52588d401443c12677cebe175d"
	I0130 19:28:55.026283   13966 cri.go:89] found id: "ad1498343e0aecdacda0e319f44979743e9c77af6e33c981042672efd8143cbe"
	I0130 19:28:55.026287   13966 cri.go:89] found id: "054d8faeaed435069439b44968cd5781d6411766e94e5ad6c6c6549515aa4561"
	I0130 19:28:55.026290   13966 cri.go:89] found id: "61d073cfb2c73b712bbfc8da7e5f7fd579c4568bf965473097bdd6a35a9aabbd"
	I0130 19:28:55.026294   13966 cri.go:89] found id: "4985d47af422b9b72623b0d30a584365872c779cd35a9b2040c8d91f34aa529c"
	I0130 19:28:55.026297   13966 cri.go:89] found id: "9ca0ebe50e27f6c7732cf6999a28c3986871b4f71ac226d9e48fd7871f04f211"
	I0130 19:28:55.026300   13966 cri.go:89] found id: "3d7508efea742f0ae0fdb37cb76e6e0112233e83a81fac0d6ad480246a55652b"
	I0130 19:28:55.026304   13966 cri.go:89] found id: ""
	I0130 19:28:55.026344   13966 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0130 19:28:55.127994   13966 main.go:141] libmachine: Making call to close driver server
	I0130 19:28:55.128017   13966 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:28:55.128300   13966 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:28:55.128321   13966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:28:55.128327   13966 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:28:55.131050   13966 out.go:177] 
	W0130 19:28:55.132756   13966 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-30T19:28:55Z" level=error msg="stat /run/containerd/runc/k8s.io/857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-30T19:28:55Z" level=error msg="stat /run/containerd/runc/k8s.io/857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4: no such file or directory"
	
	W0130 19:28:55.132785   13966 out.go:239] * 
	* 
	W0130 19:28:55.135119   13966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:28:55.136805   13966 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-444600 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-444600 -n addons-444600
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-444600 logs -n 25: (2.180169328s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |                     |
	|         | -p download-only-186149              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| delete  | -p download-only-186149              | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| start   | -o=json --download-only              | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC |                     |
	|         | -p download-only-315124              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| delete  | -p download-only-315124              | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| start   | -o=json --download-only              | download-only-027774 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC |                     |
	|         | -p download-only-027774              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-027774              | download-only-027774 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-186149              | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-315124              | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| delete  | -p download-only-027774              | download-only-027774 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| start   | --download-only -p                   | binary-mirror-338640 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | binary-mirror-338640                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36043               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-338640              | binary-mirror-338640 | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:25 UTC |
	| addons  | disable dashboard -p                 | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | addons-444600                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC |                     |
	|         | addons-444600                        |                      |         |         |                     |                     |
	| start   | -p addons-444600 --wait=true         | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:25 UTC | 30 Jan 24 19:28 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | addons-444600 addons                 | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC | 30 Jan 24 19:28 UTC |
	|         | addons-444600                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-444600        | jenkins | v1.32.0 | 30 Jan 24 19:28 UTC |                     |
	|         | -p addons-444600                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:25:03
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:25:03.314356   12673 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:25:03.314475   12673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:25:03.314498   12673 out.go:309] Setting ErrFile to fd 2...
	I0130 19:25:03.314502   12673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:25:03.314689   12673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:25:03.315333   12673 out.go:303] Setting JSON to false
	I0130 19:25:03.316211   12673 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":448,"bootTime":1706642256,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:25:03.316271   12673 start.go:138] virtualization: kvm guest
	I0130 19:25:03.318582   12673 out.go:177] * [addons-444600] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:25:03.320080   12673 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:25:03.320078   12673 notify.go:220] Checking for updates...
	I0130 19:25:03.321538   12673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:25:03.323214   12673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:25:03.324750   12673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:25:03.326285   12673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:25:03.328304   12673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:25:03.329922   12673 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:25:03.360497   12673 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 19:25:03.361965   12673 start.go:298] selected driver: kvm2
	I0130 19:25:03.361980   12673 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:25:03.361990   12673 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:25:03.362636   12673 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:25:03.362692   12673 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4431/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:25:03.376920   12673 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:25:03.376979   12673 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:25:03.377164   12673 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 19:25:03.377224   12673 cni.go:84] Creating CNI manager for ""
	I0130 19:25:03.377236   12673 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:25:03.377247   12673 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:25:03.377255   12673 start_flags.go:321] config:
	{Name:addons-444600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-444600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:25:03.377386   12673 iso.go:125] acquiring lock: {Name:mk030d287e6065b337323be40f294429c246fc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:25:03.379533   12673 out.go:177] * Starting control plane node addons-444600 in cluster addons-444600
	I0130 19:25:03.381322   12673 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0130 19:25:03.381362   12673 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0130 19:25:03.381372   12673 cache.go:56] Caching tarball of preloaded images
	I0130 19:25:03.381439   12673 preload.go:174] Found /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0130 19:25:03.381448   12673 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0130 19:25:03.381744   12673 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/config.json ...
	I0130 19:25:03.381764   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/config.json: {Name:mk4116022e530efc83b9f0e6adb3db1f4c623597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:03.381929   12673 start.go:365] acquiring machines lock for addons-444600: {Name:mk1b2638de9ddb38ffa3477971a35126ea172f51 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 19:25:03.381982   12673 start.go:369] acquired machines lock for "addons-444600" in 38.66µs
	I0130 19:25:03.381998   12673 start.go:93] Provisioning new machine with config: &{Name:addons-444600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-444600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0130 19:25:03.382053   12673 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 19:25:03.383823   12673 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0130 19:25:03.383970   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:25:03.384004   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:25:03.397612   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0130 19:25:03.398050   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:25:03.398556   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:25:03.398578   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:25:03.398940   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:25:03.399133   12673 main.go:141] libmachine: (addons-444600) Calling .GetMachineName
	I0130 19:25:03.399316   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:03.399487   12673 start.go:159] libmachine.API.Create for "addons-444600" (driver="kvm2")
	I0130 19:25:03.399525   12673 client.go:168] LocalClient.Create starting
	I0130 19:25:03.399572   12673 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem
	I0130 19:25:03.533624   12673 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/cert.pem
	I0130 19:25:03.693118   12673 main.go:141] libmachine: Running pre-create checks...
	I0130 19:25:03.693140   12673 main.go:141] libmachine: (addons-444600) Calling .PreCreateCheck
	I0130 19:25:03.693695   12673 main.go:141] libmachine: (addons-444600) Calling .GetConfigRaw
	I0130 19:25:03.694189   12673 main.go:141] libmachine: Creating machine...
	I0130 19:25:03.694204   12673 main.go:141] libmachine: (addons-444600) Calling .Create
	I0130 19:25:03.694368   12673 main.go:141] libmachine: (addons-444600) Creating KVM machine...
	I0130 19:25:03.695638   12673 main.go:141] libmachine: (addons-444600) DBG | found existing default KVM network
	I0130 19:25:03.696535   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:03.696363   12695 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0130 19:25:03.703749   12673 main.go:141] libmachine: (addons-444600) DBG | trying to create private KVM network mk-addons-444600 192.168.39.0/24...
	I0130 19:25:03.770946   12673 main.go:141] libmachine: (addons-444600) DBG | private KVM network mk-addons-444600 192.168.39.0/24 created
	I0130 19:25:03.770968   12673 main.go:141] libmachine: (addons-444600) Setting up store path in /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600 ...
	I0130 19:25:03.770978   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:03.770898   12695 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:25:03.771011   12673 main.go:141] libmachine: (addons-444600) Building disk image from file:///home/jenkins/minikube-integration/18007-4431/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 19:25:03.771079   12673 main.go:141] libmachine: (addons-444600) Downloading /home/jenkins/minikube-integration/18007-4431/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18007-4431/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 19:25:03.988462   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:03.988335   12695 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa...
	I0130 19:25:04.313363   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:04.313225   12695 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/addons-444600.rawdisk...
	I0130 19:25:04.313413   12673 main.go:141] libmachine: (addons-444600) DBG | Writing magic tar header
	I0130 19:25:04.313423   12673 main.go:141] libmachine: (addons-444600) DBG | Writing SSH key tar header
	I0130 19:25:04.313432   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:04.313358   12695 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600 ...
	I0130 19:25:04.313498   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600
	I0130 19:25:04.313548   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600 (perms=drwx------)
	I0130 19:25:04.313575   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4431/.minikube/machines
	I0130 19:25:04.313588   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins/minikube-integration/18007-4431/.minikube/machines (perms=drwxr-xr-x)
	I0130 19:25:04.313600   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins/minikube-integration/18007-4431/.minikube (perms=drwxr-xr-x)
	I0130 19:25:04.313607   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins/minikube-integration/18007-4431 (perms=drwxrwxr-x)
	I0130 19:25:04.313615   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:25:04.313627   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18007-4431
	I0130 19:25:04.313636   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 19:25:04.313648   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 19:25:04.313654   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home/jenkins
	I0130 19:25:04.313660   12673 main.go:141] libmachine: (addons-444600) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 19:25:04.313668   12673 main.go:141] libmachine: (addons-444600) Creating domain...
	I0130 19:25:04.313675   12673 main.go:141] libmachine: (addons-444600) DBG | Checking permissions on dir: /home
	I0130 19:25:04.313680   12673 main.go:141] libmachine: (addons-444600) DBG | Skipping /home - not owner
	I0130 19:25:04.314681   12673 main.go:141] libmachine: (addons-444600) define libvirt domain using xml: 
	I0130 19:25:04.314694   12673 main.go:141] libmachine: (addons-444600) <domain type='kvm'>
	I0130 19:25:04.314700   12673 main.go:141] libmachine: (addons-444600)   <name>addons-444600</name>
	I0130 19:25:04.314706   12673 main.go:141] libmachine: (addons-444600)   <memory unit='MiB'>4000</memory>
	I0130 19:25:04.314718   12673 main.go:141] libmachine: (addons-444600)   <vcpu>2</vcpu>
	I0130 19:25:04.314733   12673 main.go:141] libmachine: (addons-444600)   <features>
	I0130 19:25:04.314748   12673 main.go:141] libmachine: (addons-444600)     <acpi/>
	I0130 19:25:04.314759   12673 main.go:141] libmachine: (addons-444600)     <apic/>
	I0130 19:25:04.314769   12673 main.go:141] libmachine: (addons-444600)     <pae/>
	I0130 19:25:04.314776   12673 main.go:141] libmachine: (addons-444600)     
	I0130 19:25:04.314783   12673 main.go:141] libmachine: (addons-444600)   </features>
	I0130 19:25:04.314790   12673 main.go:141] libmachine: (addons-444600)   <cpu mode='host-passthrough'>
	I0130 19:25:04.314796   12673 main.go:141] libmachine: (addons-444600)   
	I0130 19:25:04.314803   12673 main.go:141] libmachine: (addons-444600)   </cpu>
	I0130 19:25:04.314809   12673 main.go:141] libmachine: (addons-444600)   <os>
	I0130 19:25:04.314819   12673 main.go:141] libmachine: (addons-444600)     <type>hvm</type>
	I0130 19:25:04.314827   12673 main.go:141] libmachine: (addons-444600)     <boot dev='cdrom'/>
	I0130 19:25:04.314833   12673 main.go:141] libmachine: (addons-444600)     <boot dev='hd'/>
	I0130 19:25:04.314842   12673 main.go:141] libmachine: (addons-444600)     <bootmenu enable='no'/>
	I0130 19:25:04.314848   12673 main.go:141] libmachine: (addons-444600)   </os>
	I0130 19:25:04.314856   12673 main.go:141] libmachine: (addons-444600)   <devices>
	I0130 19:25:04.314862   12673 main.go:141] libmachine: (addons-444600)     <disk type='file' device='cdrom'>
	I0130 19:25:04.314876   12673 main.go:141] libmachine: (addons-444600)       <source file='/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/boot2docker.iso'/>
	I0130 19:25:04.314884   12673 main.go:141] libmachine: (addons-444600)       <target dev='hdc' bus='scsi'/>
	I0130 19:25:04.314890   12673 main.go:141] libmachine: (addons-444600)       <readonly/>
	I0130 19:25:04.314897   12673 main.go:141] libmachine: (addons-444600)     </disk>
	I0130 19:25:04.314904   12673 main.go:141] libmachine: (addons-444600)     <disk type='file' device='disk'>
	I0130 19:25:04.314912   12673 main.go:141] libmachine: (addons-444600)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 19:25:04.314923   12673 main.go:141] libmachine: (addons-444600)       <source file='/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/addons-444600.rawdisk'/>
	I0130 19:25:04.314931   12673 main.go:141] libmachine: (addons-444600)       <target dev='hda' bus='virtio'/>
	I0130 19:25:04.314937   12673 main.go:141] libmachine: (addons-444600)     </disk>
	I0130 19:25:04.314944   12673 main.go:141] libmachine: (addons-444600)     <interface type='network'>
	I0130 19:25:04.314952   12673 main.go:141] libmachine: (addons-444600)       <source network='mk-addons-444600'/>
	I0130 19:25:04.314959   12673 main.go:141] libmachine: (addons-444600)       <model type='virtio'/>
	I0130 19:25:04.314972   12673 main.go:141] libmachine: (addons-444600)     </interface>
	I0130 19:25:04.314980   12673 main.go:141] libmachine: (addons-444600)     <interface type='network'>
	I0130 19:25:04.314989   12673 main.go:141] libmachine: (addons-444600)       <source network='default'/>
	I0130 19:25:04.314997   12673 main.go:141] libmachine: (addons-444600)       <model type='virtio'/>
	I0130 19:25:04.315005   12673 main.go:141] libmachine: (addons-444600)     </interface>
	I0130 19:25:04.315011   12673 main.go:141] libmachine: (addons-444600)     <serial type='pty'>
	I0130 19:25:04.315031   12673 main.go:141] libmachine: (addons-444600)       <target port='0'/>
	I0130 19:25:04.315042   12673 main.go:141] libmachine: (addons-444600)     </serial>
	I0130 19:25:04.315050   12673 main.go:141] libmachine: (addons-444600)     <console type='pty'>
	I0130 19:25:04.315058   12673 main.go:141] libmachine: (addons-444600)       <target type='serial' port='0'/>
	I0130 19:25:04.315066   12673 main.go:141] libmachine: (addons-444600)     </console>
	I0130 19:25:04.315071   12673 main.go:141] libmachine: (addons-444600)     <rng model='virtio'>
	I0130 19:25:04.315080   12673 main.go:141] libmachine: (addons-444600)       <backend model='random'>/dev/random</backend>
	I0130 19:25:04.315087   12673 main.go:141] libmachine: (addons-444600)     </rng>
	I0130 19:25:04.315093   12673 main.go:141] libmachine: (addons-444600)     
	I0130 19:25:04.315099   12673 main.go:141] libmachine: (addons-444600)     
	I0130 19:25:04.315105   12673 main.go:141] libmachine: (addons-444600)   </devices>
	I0130 19:25:04.315115   12673 main.go:141] libmachine: (addons-444600) </domain>
	I0130 19:25:04.315125   12673 main.go:141] libmachine: (addons-444600) 
	I0130 19:25:04.321018   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:4d:b1:c6 in network default
	I0130 19:25:04.321536   12673 main.go:141] libmachine: (addons-444600) Ensuring networks are active...
	I0130 19:25:04.321560   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:04.322234   12673 main.go:141] libmachine: (addons-444600) Ensuring network default is active
	I0130 19:25:04.322494   12673 main.go:141] libmachine: (addons-444600) Ensuring network mk-addons-444600 is active
	I0130 19:25:04.322997   12673 main.go:141] libmachine: (addons-444600) Getting domain xml...
	I0130 19:25:04.323633   12673 main.go:141] libmachine: (addons-444600) Creating domain...
	I0130 19:25:05.546178   12673 main.go:141] libmachine: (addons-444600) Waiting to get IP...
	I0130 19:25:05.547052   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:05.547480   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:05.547506   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:05.547461   12695 retry.go:31] will retry after 228.311294ms: waiting for machine to come up
	I0130 19:25:05.777949   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:05.778328   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:05.778347   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:05.778232   12695 retry.go:31] will retry after 355.440678ms: waiting for machine to come up
	I0130 19:25:06.135674   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:06.136122   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:06.136149   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:06.136076   12695 retry.go:31] will retry after 413.00032ms: waiting for machine to come up
	I0130 19:25:06.550266   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:06.550646   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:06.550686   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:06.550606   12695 retry.go:31] will retry after 557.41469ms: waiting for machine to come up
	I0130 19:25:07.109175   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:07.109653   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:07.109678   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:07.109615   12695 retry.go:31] will retry after 653.699079ms: waiting for machine to come up
	I0130 19:25:07.764532   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:07.764941   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:07.764967   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:07.764883   12695 retry.go:31] will retry after 794.508979ms: waiting for machine to come up
	I0130 19:25:08.561931   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:08.562401   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:08.562430   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:08.562356   12695 retry.go:31] will retry after 1.108902072s: waiting for machine to come up
	I0130 19:25:09.672537   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:09.672841   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:09.672870   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:09.672800   12695 retry.go:31] will retry after 1.013276918s: waiting for machine to come up
	I0130 19:25:10.688049   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:10.688428   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:10.688456   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:10.688379   12695 retry.go:31] will retry after 1.245293852s: waiting for machine to come up
	I0130 19:25:11.935805   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:11.936200   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:11.936229   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:11.936136   12695 retry.go:31] will retry after 1.875887647s: waiting for machine to come up
	I0130 19:25:13.814107   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:13.814569   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:13.814614   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:13.814543   12695 retry.go:31] will retry after 2.109599414s: waiting for machine to come up
	I0130 19:25:15.926773   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:15.927217   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:15.927246   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:15.927171   12695 retry.go:31] will retry after 3.232682743s: waiting for machine to come up
	I0130 19:25:19.161178   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:19.161499   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:19.161530   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:19.161432   12695 retry.go:31] will retry after 4.015120423s: waiting for machine to come up
	I0130 19:25:23.181400   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:23.181799   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find current IP address of domain addons-444600 in network mk-addons-444600
	I0130 19:25:23.181822   12673 main.go:141] libmachine: (addons-444600) DBG | I0130 19:25:23.181768   12695 retry.go:31] will retry after 5.387489914s: waiting for machine to come up
	I0130 19:25:28.571092   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.571569   12673 main.go:141] libmachine: (addons-444600) Found IP for machine: 192.168.39.249
	I0130 19:25:28.571590   12673 main.go:141] libmachine: (addons-444600) Reserving static IP address...
	I0130 19:25:28.571602   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has current primary IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.572040   12673 main.go:141] libmachine: (addons-444600) DBG | unable to find host DHCP lease matching {name: "addons-444600", mac: "52:54:00:fd:7c:96", ip: "192.168.39.249"} in network mk-addons-444600
	I0130 19:25:28.647775   12673 main.go:141] libmachine: (addons-444600) DBG | Getting to WaitForSSH function...
	I0130 19:25:28.647800   12673 main.go:141] libmachine: (addons-444600) Reserved static IP address: 192.168.39.249
	I0130 19:25:28.647815   12673 main.go:141] libmachine: (addons-444600) Waiting for SSH to be available...
	I0130 19:25:28.650483   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.650875   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:28.650927   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.651212   12673 main.go:141] libmachine: (addons-444600) DBG | Using SSH client type: external
	I0130 19:25:28.651235   12673 main.go:141] libmachine: (addons-444600) DBG | Using SSH private key: /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa (-rw-------)
	I0130 19:25:28.651259   12673 main.go:141] libmachine: (addons-444600) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 19:25:28.651280   12673 main.go:141] libmachine: (addons-444600) DBG | About to run SSH command:
	I0130 19:25:28.651289   12673 main.go:141] libmachine: (addons-444600) DBG | exit 0
	I0130 19:25:28.748112   12673 main.go:141] libmachine: (addons-444600) DBG | SSH cmd err, output: <nil>: 
	I0130 19:25:28.748383   12673 main.go:141] libmachine: (addons-444600) KVM machine creation complete!
	I0130 19:25:28.748683   12673 main.go:141] libmachine: (addons-444600) Calling .GetConfigRaw
	I0130 19:25:28.749206   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:28.749382   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:28.749538   12673 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 19:25:28.749554   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:25:28.750862   12673 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 19:25:28.750880   12673 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 19:25:28.750890   12673 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 19:25:28.750900   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:28.753201   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.753484   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:28.753515   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.753595   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:28.753800   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.753949   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.754110   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:28.754288   12673 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:28.754672   12673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0130 19:25:28.754696   12673 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 19:25:28.871187   12673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:25:28.871209   12673 main.go:141] libmachine: Detecting the provisioner...
	I0130 19:25:28.871217   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:28.873926   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.874262   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:28.874292   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.874394   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:28.874577   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.874805   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.874943   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:28.875226   12673 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:28.875567   12673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0130 19:25:28.875579   12673 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 19:25:28.992929   12673 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 19:25:28.993019   12673 main.go:141] libmachine: found compatible host: buildroot
	I0130 19:25:28.993041   12673 main.go:141] libmachine: Provisioning with buildroot...
	I0130 19:25:28.993049   12673 main.go:141] libmachine: (addons-444600) Calling .GetMachineName
	I0130 19:25:28.993308   12673 buildroot.go:166] provisioning hostname "addons-444600"
	I0130 19:25:28.993343   12673 main.go:141] libmachine: (addons-444600) Calling .GetMachineName
	I0130 19:25:28.993504   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:28.995931   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.996395   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:28.996424   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:28.996563   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:28.996813   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.996996   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:28.997163   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:28.997318   12673 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:28.997645   12673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0130 19:25:28.997663   12673 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-444600 && echo "addons-444600" | sudo tee /etc/hostname
	I0130 19:25:29.124624   12673 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444600
	
	I0130 19:25:29.124657   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.127574   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.127941   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.127969   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.128119   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.128310   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.128466   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.128572   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.128763   12673 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:29.129101   12673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0130 19:25:29.129119   12673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-444600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-444600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-444600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 19:25:29.256455   12673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 19:25:29.256490   12673 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18007-4431/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-4431/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-4431/.minikube}
	I0130 19:25:29.256528   12673 buildroot.go:174] setting up certificates
	I0130 19:25:29.256548   12673 provision.go:83] configureAuth start
	I0130 19:25:29.256567   12673 main.go:141] libmachine: (addons-444600) Calling .GetMachineName
	I0130 19:25:29.256846   12673 main.go:141] libmachine: (addons-444600) Calling .GetIP
	I0130 19:25:29.259104   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.259441   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.259468   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.259598   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.261637   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.261943   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.261967   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.262097   12673 provision.go:138] copyHostCerts
	I0130 19:25:29.262178   12673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-4431/.minikube/ca.pem (1078 bytes)
	I0130 19:25:29.262309   12673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-4431/.minikube/cert.pem (1123 bytes)
	I0130 19:25:29.262391   12673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-4431/.minikube/key.pem (1679 bytes)
	I0130 19:25:29.262452   12673 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-4431/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca-key.pem org=jenkins.addons-444600 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube addons-444600]
	I0130 19:25:29.404403   12673 provision.go:172] copyRemoteCerts
	I0130 19:25:29.404469   12673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 19:25:29.404498   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.407043   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.407370   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.407401   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.407573   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.407755   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.407897   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.407996   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:25:29.493443   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 19:25:29.515747   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 19:25:29.537632   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0130 19:25:29.559370   12673 provision.go:86] duration metric: configureAuth took 302.805972ms
	I0130 19:25:29.559398   12673 buildroot.go:189] setting minikube options for container-runtime
	I0130 19:25:29.559615   12673 config.go:182] Loaded profile config "addons-444600": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:25:29.559641   12673 main.go:141] libmachine: Checking connection to Docker...
	I0130 19:25:29.559654   12673 main.go:141] libmachine: (addons-444600) Calling .GetURL
	I0130 19:25:29.560755   12673 main.go:141] libmachine: (addons-444600) DBG | Using libvirt version 6000000
	I0130 19:25:29.562472   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.562849   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.562877   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.563006   12673 main.go:141] libmachine: Docker is up and running!
	I0130 19:25:29.563020   12673 main.go:141] libmachine: Reticulating splines...
	I0130 19:25:29.563026   12673 client.go:171] LocalClient.Create took 26.163491247s
	I0130 19:25:29.563040   12673 start.go:167] duration metric: libmachine.API.Create for "addons-444600" took 26.163556333s
	I0130 19:25:29.563049   12673 start.go:300] post-start starting for "addons-444600" (driver="kvm2")
	I0130 19:25:29.563065   12673 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 19:25:29.563081   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:29.563273   12673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 19:25:29.563307   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.565571   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.565934   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.565959   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.566149   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.566339   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.566514   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.566663   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:25:29.653548   12673 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 19:25:29.658502   12673 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 19:25:29.658527   12673 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4431/.minikube/addons for local assets ...
	I0130 19:25:29.658593   12673 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-4431/.minikube/files for local assets ...
	I0130 19:25:29.658617   12673 start.go:303] post-start completed in 95.563549ms
	I0130 19:25:29.658644   12673 main.go:141] libmachine: (addons-444600) Calling .GetConfigRaw
	I0130 19:25:29.659165   12673 main.go:141] libmachine: (addons-444600) Calling .GetIP
	I0130 19:25:29.661620   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.661900   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.661929   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.662190   12673 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/config.json ...
	I0130 19:25:29.662362   12673 start.go:128] duration metric: createHost completed in 26.280300347s
	I0130 19:25:29.662383   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.664496   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.664969   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.665000   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.665019   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.665224   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.665368   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.665499   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.665682   12673 main.go:141] libmachine: Using SSH client type: native
	I0130 19:25:29.665995   12673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0130 19:25:29.666007   12673 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 19:25:29.781078   12673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706642729.765992320
	
	I0130 19:25:29.781105   12673 fix.go:206] guest clock: 1706642729.765992320
	I0130 19:25:29.781114   12673 fix.go:219] Guest: 2024-01-30 19:25:29.76599232 +0000 UTC Remote: 2024-01-30 19:25:29.662373984 +0000 UTC m=+26.395805341 (delta=103.618336ms)
	I0130 19:25:29.781159   12673 fix.go:190] guest clock delta is within tolerance: 103.618336ms
	I0130 19:25:29.781166   12673 start.go:83] releasing machines lock for "addons-444600", held for 26.399174595s
	I0130 19:25:29.781194   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:29.781452   12673 main.go:141] libmachine: (addons-444600) Calling .GetIP
	I0130 19:25:29.784022   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.784345   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.784369   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.784517   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:29.784980   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:29.785117   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:25:29.785246   12673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 19:25:29.785299   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.785345   12673 ssh_runner.go:195] Run: cat /version.json
	I0130 19:25:29.785374   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:25:29.787764   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.788036   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.788077   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.788117   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.788285   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.788370   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:29.788397   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:29.788462   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.788556   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:25:29.788620   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.788688   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:25:29.788786   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:25:29.788800   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:25:29.788970   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:25:29.892982   12673 ssh_runner.go:195] Run: systemctl --version
	I0130 19:25:29.898712   12673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 19:25:29.903822   12673 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 19:25:29.903876   12673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 19:25:29.918278   12673 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 19:25:29.918302   12673 start.go:475] detecting cgroup driver to use...
	I0130 19:25:29.918367   12673 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0130 19:25:29.956976   12673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0130 19:25:29.970405   12673 docker.go:217] disabling cri-docker service (if available) ...
	I0130 19:25:29.970471   12673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 19:25:29.983565   12673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 19:25:29.996040   12673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 19:25:30.097603   12673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 19:25:30.221473   12673 docker.go:233] disabling docker service ...
	I0130 19:25:30.221533   12673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 19:25:30.234977   12673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 19:25:30.246420   12673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 19:25:30.364458   12673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 19:25:30.482428   12673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 19:25:30.494250   12673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 19:25:30.510617   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0130 19:25:30.519327   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0130 19:25:30.527841   12673 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0130 19:25:30.527901   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0130 19:25:30.536534   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0130 19:25:30.545163   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0130 19:25:30.553795   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0130 19:25:30.562548   12673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 19:25:30.571688   12673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0130 19:25:30.580636   12673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 19:25:30.588630   12673 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 19:25:30.588689   12673 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 19:25:30.600973   12673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 19:25:30.609518   12673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 19:25:30.719200   12673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0130 19:25:30.749875   12673 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0130 19:25:30.750010   12673 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0130 19:25:30.754043   12673 retry.go:31] will retry after 1.322627654s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0130 19:25:32.077533   12673 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0130 19:25:32.082655   12673 start.go:543] Will wait 60s for crictl version
	I0130 19:25:32.082726   12673 ssh_runner.go:195] Run: which crictl
	I0130 19:25:32.086468   12673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 19:25:32.122688   12673 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0130 19:25:32.122772   12673 ssh_runner.go:195] Run: containerd --version
	I0130 19:25:32.149188   12673 ssh_runner.go:195] Run: containerd --version
	I0130 19:25:32.177140   12673 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0130 19:25:32.179000   12673 main.go:141] libmachine: (addons-444600) Calling .GetIP
	I0130 19:25:32.181769   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:32.182179   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:25:32.182203   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:25:32.182444   12673 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 19:25:32.186452   12673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:25:32.198977   12673 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0130 19:25:32.199036   12673 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:25:32.234032   12673 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 19:25:32.234091   12673 ssh_runner.go:195] Run: which lz4
	I0130 19:25:32.237966   12673 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 19:25:32.242009   12673 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 19:25:32.242045   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0130 19:25:33.930747   12673 containerd.go:548] Took 1.692803 seconds to copy over tarball
	I0130 19:25:33.930822   12673 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 19:25:37.060253   12673 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.129405192s)
	I0130 19:25:37.060279   12673 containerd.go:555] Took 3.129508 seconds to extract the tarball
	I0130 19:25:37.060291   12673 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 19:25:37.100796   12673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 19:25:37.204432   12673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0130 19:25:37.227792   12673 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:25:37.284867   12673 retry.go:31] will retry after 326.857674ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-30T19:25:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0130 19:25:37.612465   12673 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 19:25:37.661853   12673 containerd.go:612] all images are preloaded for containerd runtime.
	I0130 19:25:37.661877   12673 cache_images.go:84] Images are preloaded, skipping loading
	I0130 19:25:37.661934   12673 ssh_runner.go:195] Run: sudo crictl info
	I0130 19:25:37.704462   12673 cni.go:84] Creating CNI manager for ""
	I0130 19:25:37.704484   12673 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:25:37.704503   12673 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 19:25:37.704520   12673 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-444600 NodeName:addons-444600 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 19:25:37.704639   12673 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-444600"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 19:25:37.704699   12673 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-444600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-444600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 19:25:37.704752   12673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 19:25:37.713868   12673 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 19:25:37.713954   12673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 19:25:37.722756   12673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0130 19:25:37.739573   12673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 19:25:37.755751   12673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0130 19:25:37.771496   12673 ssh_runner.go:195] Run: grep 192.168.39.249	control-plane.minikube.internal$ /etc/hosts
	I0130 19:25:37.775128   12673 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 19:25:37.787652   12673 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600 for IP: 192.168.39.249
	I0130 19:25:37.787707   12673 certs.go:190] acquiring lock for shared ca certs: {Name:mkfbadca3741e9c7130b699b3606934dd4dd2a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:37.787841   12673 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18007-4431/.minikube/ca.key
	I0130 19:25:38.148668   12673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4431/.minikube/ca.crt ...
	I0130 19:25:38.148695   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/ca.crt: {Name:mkeb3b1de9a4eb5793fa58fda79c11de1199b279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.148838   12673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4431/.minikube/ca.key ...
	I0130 19:25:38.148849   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/ca.key: {Name:mk4ccd0646ce7a95af016cd415b1e38e1e4dbab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.148910   12673 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.key
	I0130 19:25:38.263956   12673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.crt ...
	I0130 19:25:38.263990   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.crt: {Name:mk9152d349aa0f702c49fb45dc91df5728d8ae97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.264225   12673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.key ...
	I0130 19:25:38.264241   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.key: {Name:mka17b9268bac7cc0cced091102c9c4b1466bec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.264372   12673 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.key
	I0130 19:25:38.264389   12673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt with IP's: []
	I0130 19:25:38.464732   12673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt ...
	I0130 19:25:38.464761   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: {Name:mke38de96cd1b08151cc4ab0c0ee2a6ee4b79551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.464907   12673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.key ...
	I0130 19:25:38.464918   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.key: {Name:mk77764dcca6c46746ebe9467f331295ea6f210f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.464982   12673 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key.671bc323
	I0130 19:25:38.464998   12673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt.671bc323 with IP's: [192.168.39.249 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 19:25:38.699928   12673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt.671bc323 ...
	I0130 19:25:38.699958   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt.671bc323: {Name:mk7c0629e9d1aa0c58e9b864af7d21dd833814f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.700116   12673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key.671bc323 ...
	I0130 19:25:38.700129   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key.671bc323: {Name:mk581c11a9e673fdf1cd1c5f2f7723ffba6aef37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.700218   12673 certs.go:337] copying /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt.671bc323 -> /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt
	I0130 19:25:38.700317   12673 certs.go:341] copying /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key.671bc323 -> /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key
	I0130 19:25:38.700373   12673 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.key
	I0130 19:25:38.700389   12673 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.crt with IP's: []
	I0130 19:25:38.999688   12673 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.crt ...
	I0130 19:25:38.999719   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.crt: {Name:mkb8c2cb13df40662ed9335dbf98c018f2cfa014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:38.999872   12673 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.key ...
	I0130 19:25:38.999883   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.key: {Name:mkc8c307f1c0de587eb754f2b0a715eb668701e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:25:39.000200   12673 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca-key.pem (1675 bytes)
	I0130 19:25:39.000255   12673 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/home/jenkins/minikube-integration/18007-4431/.minikube/certs/ca.pem (1078 bytes)
	I0130 19:25:39.000284   12673 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/home/jenkins/minikube-integration/18007-4431/.minikube/certs/cert.pem (1123 bytes)
	I0130 19:25:39.000318   12673 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-4431/.minikube/certs/home/jenkins/minikube-integration/18007-4431/.minikube/certs/key.pem (1679 bytes)
	I0130 19:25:39.000860   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 19:25:39.025616   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 19:25:39.048498   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 19:25:39.071067   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 19:25:39.093298   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 19:25:39.115327   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 19:25:39.137953   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 19:25:39.160349   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 19:25:39.182823   12673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-4431/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 19:25:39.205376   12673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 19:25:39.221184   12673 ssh_runner.go:195] Run: openssl version
	I0130 19:25:39.226530   12673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 19:25:39.236127   12673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:39.240667   12673 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:39.240743   12673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 19:25:39.246196   12673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 19:25:39.255647   12673 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 19:25:39.259651   12673 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 19:25:39.259712   12673 kubeadm.go:404] StartCluster: {Name:addons-444600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-444600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:25:39.259794   12673 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0130 19:25:39.259844   12673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 19:25:39.296751   12673 cri.go:89] found id: ""
	I0130 19:25:39.296833   12673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 19:25:39.305300   12673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 19:25:39.313408   12673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 19:25:39.321332   12673 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 19:25:39.321368   12673 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 19:25:39.512844   12673 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 19:25:51.705006   12673 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 19:25:51.705068   12673 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 19:25:51.705175   12673 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 19:25:51.705304   12673 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 19:25:51.705451   12673 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 19:25:51.705545   12673 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 19:25:51.707702   12673 out.go:204]   - Generating certificates and keys ...
	I0130 19:25:51.707813   12673 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 19:25:51.707888   12673 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 19:25:51.707964   12673 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 19:25:51.708040   12673 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 19:25:51.708129   12673 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 19:25:51.708216   12673 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 19:25:51.708291   12673 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 19:25:51.708428   12673 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-444600 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0130 19:25:51.708487   12673 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 19:25:51.708604   12673 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-444600 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0130 19:25:51.708678   12673 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 19:25:51.708745   12673 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 19:25:51.708813   12673 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 19:25:51.708876   12673 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 19:25:51.708947   12673 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 19:25:51.709054   12673 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 19:25:51.709143   12673 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 19:25:51.709225   12673 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 19:25:51.709345   12673 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 19:25:51.709437   12673 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 19:25:51.711133   12673 out.go:204]   - Booting up control plane ...
	I0130 19:25:51.711239   12673 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 19:25:51.711346   12673 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 19:25:51.711447   12673 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 19:25:51.711595   12673 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 19:25:51.711724   12673 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 19:25:51.711782   12673 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 19:25:51.711987   12673 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 19:25:51.712082   12673 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004663 seconds
	I0130 19:25:51.712241   12673 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 19:25:51.712379   12673 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 19:25:51.712474   12673 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 19:25:51.712643   12673 kubeadm.go:322] [mark-control-plane] Marking the node addons-444600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 19:25:51.712730   12673 kubeadm.go:322] [bootstrap-token] Using token: f76z4w.yy2m1b5dy2dmpt50
	I0130 19:25:51.714322   12673 out.go:204]   - Configuring RBAC rules ...
	I0130 19:25:51.714434   12673 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 19:25:51.714534   12673 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 19:25:51.714670   12673 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 19:25:51.714779   12673 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 19:25:51.714871   12673 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 19:25:51.714940   12673 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 19:25:51.715042   12673 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 19:25:51.715077   12673 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 19:25:51.715132   12673 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 19:25:51.715146   12673 kubeadm.go:322] 
	I0130 19:25:51.715223   12673 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 19:25:51.715233   12673 kubeadm.go:322] 
	I0130 19:25:51.715333   12673 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 19:25:51.715345   12673 kubeadm.go:322] 
	I0130 19:25:51.715368   12673 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 19:25:51.715416   12673 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 19:25:51.715465   12673 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 19:25:51.715471   12673 kubeadm.go:322] 
	I0130 19:25:51.715522   12673 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 19:25:51.715529   12673 kubeadm.go:322] 
	I0130 19:25:51.715577   12673 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 19:25:51.715586   12673 kubeadm.go:322] 
	I0130 19:25:51.715627   12673 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 19:25:51.715721   12673 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 19:25:51.715818   12673 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 19:25:51.715827   12673 kubeadm.go:322] 
	I0130 19:25:51.715930   12673 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 19:25:51.716049   12673 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 19:25:51.716062   12673 kubeadm.go:322] 
	I0130 19:25:51.716183   12673 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f76z4w.yy2m1b5dy2dmpt50 \
	I0130 19:25:51.716322   12673 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac11e5ae1c7c64ef4e553c274b250919664a1537940d63bab67fcd69d1331104 \
	I0130 19:25:51.716358   12673 kubeadm.go:322] 	--control-plane 
	I0130 19:25:51.716367   12673 kubeadm.go:322] 
	I0130 19:25:51.716477   12673 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 19:25:51.716487   12673 kubeadm.go:322] 
	I0130 19:25:51.716615   12673 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f76z4w.yy2m1b5dy2dmpt50 \
	I0130 19:25:51.716756   12673 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac11e5ae1c7c64ef4e553c274b250919664a1537940d63bab67fcd69d1331104 
	I0130 19:25:51.716770   12673 cni.go:84] Creating CNI manager for ""
	I0130 19:25:51.716777   12673 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:25:51.718525   12673 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 19:25:51.720122   12673 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 19:25:51.733552   12673 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 19:25:51.755705   12673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 19:25:51.755745   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:51.755763   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=addons-444600 minikube.k8s.io/updated_at=2024_01_30T19_25_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:51.822059   12673 ops.go:34] apiserver oom_adj: -16
	I0130 19:25:51.952825   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:52.453793   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:52.953738   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:53.453466   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:53.953612   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:54.452996   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:54.953711   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:55.453602   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:55.952953   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:56.452934   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:56.953249   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:57.453179   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:57.952835   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:58.453368   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:58.953592   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:59.453884   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:25:59.953024   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:00.453099   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:00.953787   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:01.453714   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:01.953666   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:02.453551   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:02.953392   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:03.453152   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:03.953366   12673 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 19:26:04.054179   12673 kubeadm.go:1088] duration metric: took 12.29849663s to wait for elevateKubeSystemPrivileges.
	I0130 19:26:04.054220   12673 kubeadm.go:406] StartCluster complete in 24.79451101s
	I0130 19:26:04.054242   12673 settings.go:142] acquiring lock: {Name:mk5588669dc9bbddec12ea38d35e87aa52e8d27e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:26:04.054384   12673 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:26:04.054925   12673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/kubeconfig: {Name:mk107d2f302314ea1bc3d465602d5e12f22bcea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:26:04.055154   12673 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 19:26:04.055243   12673 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0130 19:26:04.055336   12673 addons.go:69] Setting yakd=true in profile "addons-444600"
	I0130 19:26:04.055359   12673 addons.go:234] Setting addon yakd=true in "addons-444600"
	I0130 19:26:04.055423   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.055447   12673 config.go:182] Loaded profile config "addons-444600": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:26:04.055500   12673 addons.go:69] Setting ingress-dns=true in profile "addons-444600"
	I0130 19:26:04.055523   12673 addons.go:234] Setting addon ingress-dns=true in "addons-444600"
	I0130 19:26:04.055587   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.055794   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.055828   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.055900   12673 addons.go:69] Setting registry=true in profile "addons-444600"
	I0130 19:26:04.055924   12673 addons.go:234] Setting addon registry=true in "addons-444600"
	I0130 19:26:04.055962   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.055977   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.055993   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056185   12673 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-444600"
	I0130 19:26:04.056203   12673 addons.go:69] Setting default-storageclass=true in profile "addons-444600"
	I0130 19:26:04.056213   12673 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-444600"
	I0130 19:26:04.056231   12673 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-444600"
	I0130 19:26:04.056225   12673 addons.go:69] Setting metrics-server=true in profile "addons-444600"
	I0130 19:26:04.056230   12673 addons.go:69] Setting inspektor-gadget=true in profile "addons-444600"
	I0130 19:26:04.056259   12673 addons.go:234] Setting addon inspektor-gadget=true in "addons-444600"
	I0130 19:26:04.056259   12673 addons.go:234] Setting addon metrics-server=true in "addons-444600"
	I0130 19:26:04.056297   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.056300   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.056374   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.056399   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056579   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.056596   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056620   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.056627   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.056634   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.056639   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056644   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056662   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.056690   12673 addons.go:69] Setting cloud-spanner=true in profile "addons-444600"
	I0130 19:26:04.056692   12673 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-444600"
	I0130 19:26:04.056703   12673 addons.go:234] Setting addon cloud-spanner=true in "addons-444600"
	I0130 19:26:04.056709   12673 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-444600"
	I0130 19:26:04.056713   12673 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-444600"
	I0130 19:26:04.056726   12673 addons.go:69] Setting helm-tiller=true in profile "addons-444600"
	I0130 19:26:04.056743   12673 addons.go:234] Setting addon helm-tiller=true in "addons-444600"
	I0130 19:26:04.056778   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.056779   12673 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-444600"
	I0130 19:26:04.056814   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057104   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.057127   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.057132   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.057145   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.057174   12673 addons.go:69] Setting gcp-auth=true in profile "addons-444600"
	I0130 19:26:04.057189   12673 addons.go:69] Setting ingress=true in profile "addons-444600"
	I0130 19:26:04.057191   12673 mustload.go:65] Loading cluster: addons-444600
	I0130 19:26:04.057201   12673 addons.go:234] Setting addon ingress=true in "addons-444600"
	I0130 19:26:04.057220   12673 addons.go:69] Setting volumesnapshots=true in profile "addons-444600"
	I0130 19:26:04.057231   12673 addons.go:234] Setting addon volumesnapshots=true in "addons-444600"
	I0130 19:26:04.057265   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057364   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057439   12673 addons.go:69] Setting storage-provisioner=true in profile "addons-444600"
	I0130 19:26:04.057451   12673 addons.go:234] Setting addon storage-provisioner=true in "addons-444600"
	I0130 19:26:04.057461   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057481   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057717   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.057743   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.057787   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.057804   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.057809   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.057840   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.057922   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.057935   12673 config.go:182] Loaded profile config "addons-444600": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:26:04.058255   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.058255   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.058277   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.058380   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.076391   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0130 19:26:04.076624   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0130 19:26:04.076997   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0130 19:26:04.077563   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.078028   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0130 19:26:04.078255   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.078295   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.078312   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.078340   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.078379   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0130 19:26:04.078739   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.078785   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.078801   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.078861   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.079033   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.079198   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.079218   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.079864   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.080055   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.080116   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.080132   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.080142   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.080163   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.080535   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.080641   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.080708   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.082255   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44779
	I0130 19:26:04.086808   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0130 19:26:04.088579   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.088600   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.088669   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.088712   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.088909   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.088933   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.090112   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.090223   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.090250   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.090307   12673 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-444600"
	I0130 19:26:04.090346   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.090812   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.090827   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.091118   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.091137   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.091250   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.091280   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.091742   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.091824   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.092621   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.092654   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.094661   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.094683   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.095109   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.095283   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.102208   12673 addons.go:234] Setting addon default-storageclass=true in "addons-444600"
	I0130 19:26:04.102311   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.102750   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.102909   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.112111   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0130 19:26:04.112578   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.113053   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.113067   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.113741   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.114150   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.114176   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.119923   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0130 19:26:04.121363   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.121959   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.121977   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.122312   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.122818   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.122853   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.134297   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0130 19:26:04.134808   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.135291   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.135310   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.135933   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.136499   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.136543   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.138692   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0130 19:26:04.139002   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35979
	I0130 19:26:04.139203   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.139698   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.139734   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.139752   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.140214   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.140232   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.140289   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.140725   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.140817   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.140844   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.140931   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.142390   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.144773   12673 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0130 19:26:04.144962   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0130 19:26:04.145125   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0130 19:26:04.145174   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0130 19:26:04.147429   12673 out.go:177]   - Using image docker.io/busybox:stable
	I0130 19:26:04.146199   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0130 19:26:04.148283   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.148298   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.148313   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0130 19:26:04.148337   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0130 19:26:04.148480   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.149714   12673 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 19:26:04.149736   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0130 19:26:04.149755   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.151023   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.151054   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.151867   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.151885   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.151949   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.152056   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.152081   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.152093   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.152121   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.152760   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.152811   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.152833   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.152846   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.152849   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.152995   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.153046   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.153616   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.153678   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.153695   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.154589   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.154669   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.154693   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.155197   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.155233   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.155501   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.155503   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.155725   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:04.156046   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.156078   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.156103   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.156435   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.159005   12673 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0130 19:26:04.156753   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.156796   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.156815   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.157909   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.158096   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0130 19:26:04.160898   12673 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 19:26:04.160915   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 19:26:04.160932   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.160993   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.161014   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.161242   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.161423   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.161989   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.163945   12673 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0130 19:26:04.162207   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.162711   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.163609   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0130 19:26:04.164083   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.164775   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.165456   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0130 19:26:04.165557   12673 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 19:26:04.165577   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0130 19:26:04.165595   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.165564   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.165653   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.165668   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.165786   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.165817   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.165925   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.166021   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.166177   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.166234   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0130 19:26:04.166460   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.166909   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0130 19:26:04.167144   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.167566   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.167577   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.167622   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.167973   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0130 19:26:04.168073   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.168150   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.168155   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.168162   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.170639   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0130 19:26:04.168549   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.170635   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.168680   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.168586   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.169033   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.169169   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.173001   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0130 19:26:04.173021   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0130 19:26:04.169278   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.170971   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.171086   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.171331   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.171463   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.172360   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.172390   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0130 19:26:04.172554   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.173058   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.173132   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.173149   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.174365   12673 out.go:177]   - Using image docker.io/registry:2.8.3
	I0130 19:26:04.174432   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.174479   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.174499   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.174507   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.174724   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.174748   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.175241   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.177969   12673 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0130 19:26:04.176316   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.176388   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.176395   12673 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 19:26:04.176604   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.176654   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.176852   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.176973   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.177413   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.178965   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0130 19:26:04.179774   12673 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0130 19:26:04.179788   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0130 19:26:04.179800   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.179804   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.179810   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.179834   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 19:26:04.179841   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.179861   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.179885   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.180087   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.180629   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.180680   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.180774   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.180804   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.180966   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.180984   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.181149   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.181630   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.181767   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.181779   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.182510   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.183094   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:04.183119   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:04.183314   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.185561   12673 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0130 19:26:04.186921   12673 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0130 19:26:04.186943   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0130 19:26:04.186961   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.185636   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.184755   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.184970   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.187086   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.187109   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.185258   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.185434   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.184469   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.187189   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.187223   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.189320   12673 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0130 19:26:04.187871   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.187910   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.190298   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.190781   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.190960   12673 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 19:26:04.191271   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.191291   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.192999   12673 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0130 19:26:04.194640   12673 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0130 19:26:04.194654   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0130 19:26:04.194673   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.193129   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.194724   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.193144   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0130 19:26:04.194744   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.193465   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.193508   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.193524   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.195985   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.196164   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.198468   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.198937   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.198957   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.199153   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.199345   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.199544   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.199644   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.199995   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.200430   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.200453   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.200617   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.200777   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.200967   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.201033   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0130 19:26:04.201781   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.201804   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.202132   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.202143   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.202181   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I0130 19:26:04.202473   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.202651   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.202705   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.202876   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45977
	I0130 19:26:04.203108   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.203121   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.203418   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.203878   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.203892   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.204246   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.204424   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.204470   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.206944   12673 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0130 19:26:04.205473   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0130 19:26:04.205498   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.206262   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
	I0130 19:26:04.207025   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I0130 19:26:04.209691   12673 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0130 19:26:04.209710   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0130 19:26:04.209730   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.209968   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.209990   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.210076   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.210445   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.210461   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.210555   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.210567   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.211250   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.211275   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.211394   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:04.211417   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.211460   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.212019   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:04.212037   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:04.212395   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:04.212579   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:04.212895   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.214803   12673 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0130 19:26:04.213545   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.213574   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.213999   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.214381   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.214396   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:04.216283   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.216307   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.217523   12673 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:04.216723   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.218929   12673 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:04.219091   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.220296   12673 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 19:26:04.220359   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0130 19:26:04.220330   12673 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0130 19:26:04.220549   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.222008   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0130 19:26:04.223687   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0130 19:26:04.222372   12673 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 19:26:04.222134   12673 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:26:04.222118   12673 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0130 19:26:04.223710   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0130 19:26:04.223727   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.225125   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0130 19:26:04.226458   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0130 19:26:04.224297   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0130 19:26:04.227848   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0130 19:26:04.226497   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.224314   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 19:26:04.227926   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.226561   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.227979   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.228004   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.227129   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.229845   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0130 19:26:04.228549   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.230012   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.231408   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.231711   12673 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0130 19:26:04.233055   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0130 19:26:04.233077   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0130 19:26:04.233094   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:04.231693   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.233161   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.233183   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.231731   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.233202   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.231867   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.231905   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.232415   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.233871   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.233978   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.234033   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.234201   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.234202   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.234453   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:04.235732   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.236133   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:04.236164   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:04.236215   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:04.236383   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:04.236556   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:04.236690   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	W0130 19:26:04.238707   12673 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53228->192.168.39.249:22: read: connection reset by peer
	I0130 19:26:04.238733   12673 retry.go:31] will retry after 250.530815ms: ssh: handshake failed: read tcp 192.168.39.1:53228->192.168.39.249:22: read: connection reset by peer
	W0130 19:26:04.239739   12673 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53272->192.168.39.249:22: read: connection reset by peer
	I0130 19:26:04.239756   12673 retry.go:31] will retry after 218.071197ms: ssh: handshake failed: read tcp 192.168.39.1:53272->192.168.39.249:22: read: connection reset by peer
	I0130 19:26:04.593370   12673 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-444600" context rescaled to 1 replicas
	I0130 19:26:04.593407   12673 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0130 19:26:04.595514   12673 out.go:177] * Verifying Kubernetes components...
	I0130 19:26:04.597120   12673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:26:04.617575   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 19:26:04.728293   12673 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 19:26:04.744673   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 19:26:04.745065   12673 node_ready.go:35] waiting up to 6m0s for node "addons-444600" to be "Ready" ...
	I0130 19:26:04.748488   12673 node_ready.go:49] node "addons-444600" has status "Ready":"True"
	I0130 19:26:04.748507   12673 node_ready.go:38] duration metric: took 3.414218ms waiting for node "addons-444600" to be "Ready" ...
	I0130 19:26:04.748515   12673 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:26:04.756224   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 19:26:04.757955   12673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p8k4q" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:04.806543   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 19:26:04.817082   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 19:26:04.821755   12673 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0130 19:26:04.821776   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0130 19:26:04.824542   12673 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0130 19:26:04.824555   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0130 19:26:04.832474   12673 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0130 19:26:04.832487   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0130 19:26:04.887588   12673 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 19:26:04.887611   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0130 19:26:04.913224   12673 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0130 19:26:04.913248   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0130 19:26:04.916219   12673 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0130 19:26:04.916238   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0130 19:26:04.937952   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 19:26:05.020821   12673 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 19:26:05.020847   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0130 19:26:05.055735   12673 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0130 19:26:05.055758   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0130 19:26:05.106768   12673 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0130 19:26:05.106789   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0130 19:26:05.140486   12673 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 19:26:05.140507   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 19:26:05.160622   12673 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0130 19:26:05.160646   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0130 19:26:05.269870   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0130 19:26:05.311482   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0130 19:26:05.311506   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0130 19:26:05.355704   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 19:26:05.415997   12673 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0130 19:26:05.416020   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0130 19:26:05.453674   12673 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0130 19:26:05.453701   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0130 19:26:05.578469   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0130 19:26:05.599481   12673 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0130 19:26:05.599502   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0130 19:26:05.605823   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0130 19:26:05.605847   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0130 19:26:05.737282   12673 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 19:26:05.737315   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 19:26:05.808099   12673 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0130 19:26:05.808124   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0130 19:26:05.999794   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 19:26:06.020748   12673 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0130 19:26:06.020773   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0130 19:26:06.025379   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0130 19:26:06.025398   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0130 19:26:06.063628   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0130 19:26:06.063649   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0130 19:26:06.121737   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0130 19:26:06.121757   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0130 19:26:06.131983   12673 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0130 19:26:06.132005   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0130 19:26:06.148769   12673 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0130 19:26:06.148803   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0130 19:26:06.175325   12673 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0130 19:26:06.175347   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0130 19:26:06.185318   12673 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:06.185337   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0130 19:26:06.255473   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0130 19:26:06.455803   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0130 19:26:06.455827   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0130 19:26:06.491365   12673 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0130 19:26:06.491390   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0130 19:26:06.518444   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:06.771691   12673 pod_ready.go:102] pod "coredns-5dd5756b68-p8k4q" in "kube-system" namespace has status "Ready":"False"
	I0130 19:26:06.795039   12673 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 19:26:06.795058   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0130 19:26:06.925804   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0130 19:26:06.925826   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0130 19:26:07.269581   12673 pod_ready.go:92] pod "coredns-5dd5756b68-p8k4q" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.269600   12673 pod_ready.go:81] duration metric: took 2.511623722s waiting for pod "coredns-5dd5756b68-p8k4q" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.269608   12673 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zwcsh" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.271645   12673 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zwcsh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zwcsh" not found
	I0130 19:26:07.271663   12673 pod_ready.go:81] duration metric: took 2.049415ms waiting for pod "coredns-5dd5756b68-zwcsh" in "kube-system" namespace to be "Ready" ...
	E0130 19:26:07.271671   12673 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zwcsh" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zwcsh" not found
	I0130 19:26:07.271677   12673 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.277198   12673 pod_ready.go:92] pod "etcd-addons-444600" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.277219   12673 pod_ready.go:81] duration metric: took 5.536725ms waiting for pod "etcd-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.277227   12673 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.284995   12673 pod_ready.go:92] pod "kube-apiserver-addons-444600" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.285009   12673 pod_ready.go:81] duration metric: took 7.776472ms waiting for pod "kube-apiserver-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.285027   12673 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.291336   12673 pod_ready.go:92] pod "kube-controller-manager-addons-444600" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.291362   12673 pod_ready.go:81] duration metric: took 6.32148ms waiting for pod "kube-controller-manager-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.291371   12673 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-krwt6" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.381905   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 19:26:07.462591   12673 pod_ready.go:92] pod "kube-proxy-krwt6" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.462611   12673 pod_ready.go:81] duration metric: took 171.234504ms waiting for pod "kube-proxy-krwt6" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.462621   12673 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.482475   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0130 19:26:07.482497   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0130 19:26:07.847863   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0130 19:26:07.847885   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0130 19:26:07.862756   12673 pod_ready.go:92] pod "kube-scheduler-addons-444600" in "kube-system" namespace has status "Ready":"True"
	I0130 19:26:07.862776   12673 pod_ready.go:81] duration metric: took 400.149745ms waiting for pod "kube-scheduler-addons-444600" in "kube-system" namespace to be "Ready" ...
	I0130 19:26:07.862784   12673 pod_ready.go:38] duration metric: took 3.114260562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 19:26:07.862799   12673 api_server.go:52] waiting for apiserver process to appear ...
	I0130 19:26:07.862842   12673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 19:26:08.034635   12673 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 19:26:08.034656   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0130 19:26:08.182708   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 19:26:08.860507   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.242896242s)
	I0130 19:26:08.860554   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.860567   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:08.860839   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:08.860898   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.860918   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.860938   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:08.860951   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:08.861224   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:08.861276   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:08.861298   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:09.224451   12673 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.496121071s)
	I0130 19:26:09.224484   12673 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 19:26:10.710452   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.965740221s)
	I0130 19:26:10.710510   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:10.710525   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:10.710800   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:10.710820   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:10.710832   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:10.710842   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:10.710845   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:10.711136   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:10.711153   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:10.813168   12673 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0130 19:26:10.813210   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:10.816638   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:10.817113   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:10.817148   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:10.817301   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:10.817503   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:10.817666   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:10.817797   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:11.832487   12673 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0130 19:26:12.205257   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.398684804s)
	I0130 19:26:12.205306   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205320   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205362   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.388259058s)
	I0130 19:26:12.205389   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205397   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205409   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.449146091s)
	I0130 19:26:12.205447   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205475   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205651   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:12.205689   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:12.205711   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.205721   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.205730   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205732   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:12.205738   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205765   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.205780   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.205799   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205813   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205872   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.205889   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.205900   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.205902   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:12.205908   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.205923   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.205931   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.206144   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.206159   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.206372   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:12.206402   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.206410   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.247699   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.247725   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.248106   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.248124   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:12.248295   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:12.248313   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:12.248520   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:12.248542   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	W0130 19:26:12.248626   12673 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0130 19:26:12.340986   12673 addons.go:234] Setting addon gcp-auth=true in "addons-444600"
	I0130 19:26:12.341077   12673 host.go:66] Checking if "addons-444600" exists ...
	I0130 19:26:12.341523   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:12.341574   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:12.356065   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0130 19:26:12.356484   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:12.356972   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:12.356999   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:12.357320   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:12.357801   12673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:26:12.357845   12673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:26:12.373413   12673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0130 19:26:12.373847   12673 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:26:12.374289   12673 main.go:141] libmachine: Using API Version  1
	I0130 19:26:12.374313   12673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:26:12.374600   12673 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:26:12.374777   12673 main.go:141] libmachine: (addons-444600) Calling .GetState
	I0130 19:26:12.376329   12673 main.go:141] libmachine: (addons-444600) Calling .DriverName
	I0130 19:26:12.376543   12673 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0130 19:26:12.376571   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHHostname
	I0130 19:26:12.379460   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:12.379932   12673 main.go:141] libmachine: (addons-444600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:7c:96", ip: ""} in network mk-addons-444600: {Iface:virbr1 ExpiryTime:2024-01-30 20:25:19 +0000 UTC Type:0 Mac:52:54:00:fd:7c:96 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-444600 Clientid:01:52:54:00:fd:7c:96}
	I0130 19:26:12.379959   12673 main.go:141] libmachine: (addons-444600) DBG | domain addons-444600 has defined IP address 192.168.39.249 and MAC address 52:54:00:fd:7c:96 in network mk-addons-444600
	I0130 19:26:12.380125   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHPort
	I0130 19:26:12.380321   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHKeyPath
	I0130 19:26:12.380488   12673 main.go:141] libmachine: (addons-444600) Calling .GetSSHUsername
	I0130 19:26:12.380686   12673 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/addons-444600/id_rsa Username:docker}
	I0130 19:26:16.062983   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.793074837s)
	I0130 19:26:16.063034   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063048   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063049   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.125065173s)
	I0130 19:26:16.063067   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.707303319s)
	I0130 19:26:16.063092   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063110   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063116   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.484604642s)
	I0130 19:26:16.063144   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063092   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063162   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063168   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063186   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.063364397s)
	I0130 19:26:16.063248   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063262   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.80774667s)
	I0130 19:26:16.063291   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063305   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063271   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063611   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.545134025s)
	W0130 19:26:16.063646   12673 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0130 19:26:16.063669   12673 retry.go:31] will retry after 142.90535ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0130 19:26:16.063770   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.681790792s)
	I0130 19:26:16.063796   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.063809   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.063872   12673 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.201016719s)
	I0130 19:26:16.063888   12673 api_server.go:72] duration metric: took 11.470451237s to wait for apiserver process to appear ...
	I0130 19:26:16.063895   12673 api_server.go:88] waiting for apiserver healthz status ...
	I0130 19:26:16.063909   12673 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0130 19:26:16.065163   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065170   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065180   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065191   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065200   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065203   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065212   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065220   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065229   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065252   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065276   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065284   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065293   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065301   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065344   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065360   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065375   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065376   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065398   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065407   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065407   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065417   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065423   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065427   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065432   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065436   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065480   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065491   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065500   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065508   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065552   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065560   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065570   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:16.065579   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:16.065693   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065716   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065724   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.065734   12673 addons.go:470] Verifying addon registry=true in "addons-444600"
	I0130 19:26:16.068290   12673 out.go:177] * Verifying registry addon...
	I0130 19:26:16.065865   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065902   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065922   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.065980   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.065981   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.066186   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.066220   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.066340   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.066361   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.066511   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:16.066531   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:16.068324   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.069976   12673 addons.go:470] Verifying addon metrics-server=true in "addons-444600"
	I0130 19:26:16.068335   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.068334   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.068339   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.068346   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.068343   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:16.070606   12673 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0130 19:26:16.071435   12673 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-444600 service yakd-dashboard -n yakd-dashboard
	
	I0130 19:26:16.071459   12673 addons.go:470] Verifying addon ingress=true in "addons-444600"
	I0130 19:26:16.074409   12673 out.go:177] * Verifying ingress addon...
	I0130 19:26:16.076865   12673 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0130 19:26:16.077184   12673 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0130 19:26:16.078617   12673 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0130 19:26:16.078633   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:16.078904   12673 api_server.go:141] control plane version: v1.28.4
	I0130 19:26:16.078920   12673 api_server.go:131] duration metric: took 15.019735ms to wait for apiserver health ...
	I0130 19:26:16.078926   12673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 19:26:16.080355   12673 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0130 19:26:16.080369   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:16.092791   12673 system_pods.go:59] 15 kube-system pods found
	I0130 19:26:16.092816   12673 system_pods.go:61] "coredns-5dd5756b68-p8k4q" [01d400f5-eb85-4ecf-a930-087d56976132] Running
	I0130 19:26:16.092821   12673 system_pods.go:61] "etcd-addons-444600" [6c44fc86-4038-45e1-87ce-bb410ae9b51f] Running
	I0130 19:26:16.092825   12673 system_pods.go:61] "kube-apiserver-addons-444600" [74ba8084-53b7-4d42-ba7a-8914fa29a1b9] Running
	I0130 19:26:16.092829   12673 system_pods.go:61] "kube-controller-manager-addons-444600" [df67ef30-3de4-40ab-ad3b-d0c7dba754f8] Running
	I0130 19:26:16.092835   12673 system_pods.go:61] "kube-ingress-dns-minikube" [0773158f-ede2-4383-9fbf-cabf74ba544e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0130 19:26:16.092839   12673 system_pods.go:61] "kube-proxy-krwt6" [e16cf2bd-2675-416a-9332-0ab56752228f] Running
	I0130 19:26:16.092844   12673 system_pods.go:61] "kube-scheduler-addons-444600" [2686cc0d-dc0d-4212-9416-3c1f625a45b5] Running
	I0130 19:26:16.092850   12673 system_pods.go:61] "metrics-server-7c66d45ddc-lslrk" [a45f01f1-c37f-4e4f-8f61-7aa191b86125] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 19:26:16.092859   12673 system_pods.go:61] "nvidia-device-plugin-daemonset-2tfw7" [fe8a7fc9-87ff-4886-b717-8fea5a316403] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0130 19:26:16.092868   12673 system_pods.go:61] "registry-proxy-bsjjs" [5f9a3c47-9f01-4b1f-b10f-9da8dc70fcf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0130 19:26:16.092876   12673 system_pods.go:61] "registry-x98sf" [8d3a9cf6-39e5-437f-bda5-4dd87d2ca039] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0130 19:26:16.092894   12673 system_pods.go:61] "snapshot-controller-58dbcc7b99-9n26q" [06654fe9-6bd8-4091-9101-c348f29e13ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 19:26:16.092904   12673 system_pods.go:61] "snapshot-controller-58dbcc7b99-xsmhd" [ecb54ac3-21a4-4bc3-8599-e899c36a9f84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 19:26:16.092911   12673 system_pods.go:61] "storage-provisioner" [ecb66b73-3e14-4bef-b527-48bb1b755ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 19:26:16.092921   12673 system_pods.go:61] "tiller-deploy-7b677967b9-kkzmv" [39af6ffd-8c53-49cd-9630-cf1724428289] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0130 19:26:16.092927   12673 system_pods.go:74] duration metric: took 13.996103ms to wait for pod list to return data ...
	I0130 19:26:16.092937   12673 default_sa.go:34] waiting for default service account to be created ...
	I0130 19:26:16.096142   12673 default_sa.go:45] found service account: "default"
	I0130 19:26:16.096158   12673 default_sa.go:55] duration metric: took 3.216233ms for default service account to be created ...
	I0130 19:26:16.096164   12673 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 19:26:16.110828   12673 system_pods.go:86] 15 kube-system pods found
	I0130 19:26:16.110852   12673 system_pods.go:89] "coredns-5dd5756b68-p8k4q" [01d400f5-eb85-4ecf-a930-087d56976132] Running
	I0130 19:26:16.110857   12673 system_pods.go:89] "etcd-addons-444600" [6c44fc86-4038-45e1-87ce-bb410ae9b51f] Running
	I0130 19:26:16.110861   12673 system_pods.go:89] "kube-apiserver-addons-444600" [74ba8084-53b7-4d42-ba7a-8914fa29a1b9] Running
	I0130 19:26:16.110865   12673 system_pods.go:89] "kube-controller-manager-addons-444600" [df67ef30-3de4-40ab-ad3b-d0c7dba754f8] Running
	I0130 19:26:16.110873   12673 system_pods.go:89] "kube-ingress-dns-minikube" [0773158f-ede2-4383-9fbf-cabf74ba544e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0130 19:26:16.110904   12673 system_pods.go:89] "kube-proxy-krwt6" [e16cf2bd-2675-416a-9332-0ab56752228f] Running
	I0130 19:26:16.110910   12673 system_pods.go:89] "kube-scheduler-addons-444600" [2686cc0d-dc0d-4212-9416-3c1f625a45b5] Running
	I0130 19:26:16.110916   12673 system_pods.go:89] "metrics-server-7c66d45ddc-lslrk" [a45f01f1-c37f-4e4f-8f61-7aa191b86125] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 19:26:16.110926   12673 system_pods.go:89] "nvidia-device-plugin-daemonset-2tfw7" [fe8a7fc9-87ff-4886-b717-8fea5a316403] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0130 19:26:16.110932   12673 system_pods.go:89] "registry-proxy-bsjjs" [5f9a3c47-9f01-4b1f-b10f-9da8dc70fcf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0130 19:26:16.110944   12673 system_pods.go:89] "registry-x98sf" [8d3a9cf6-39e5-437f-bda5-4dd87d2ca039] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0130 19:26:16.110951   12673 system_pods.go:89] "snapshot-controller-58dbcc7b99-9n26q" [06654fe9-6bd8-4091-9101-c348f29e13ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 19:26:16.110959   12673 system_pods.go:89] "snapshot-controller-58dbcc7b99-xsmhd" [ecb54ac3-21a4-4bc3-8599-e899c36a9f84] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 19:26:16.110965   12673 system_pods.go:89] "storage-provisioner" [ecb66b73-3e14-4bef-b527-48bb1b755ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 19:26:16.110971   12673 system_pods.go:89] "tiller-deploy-7b677967b9-kkzmv" [39af6ffd-8c53-49cd-9630-cf1724428289] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0130 19:26:16.110980   12673 system_pods.go:126] duration metric: took 14.798368ms to wait for k8s-apps to be running ...
	I0130 19:26:16.110988   12673 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 19:26:16.111030   12673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:26:16.207635   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 19:26:16.577377   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:16.582303   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:17.089704   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:17.094858   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:17.578649   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:17.582371   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:18.078368   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:18.081451   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:18.576078   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:18.587272   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:18.600747   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.417981975s)
	I0130 19:26:18.600773   12673 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.224207962s)
	I0130 19:26:18.600806   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:18.600821   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:18.600828   12673 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.489765654s)
	I0130 19:26:18.602583   12673 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 19:26:18.600852   12673 system_svc.go:56] duration metric: took 2.489860343s WaitForService to wait for kubelet.
	I0130 19:26:18.601105   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:18.601129   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:18.604201   12673 kubeadm.go:581] duration metric: took 14.010759365s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 19:26:18.604229   12673 node_conditions.go:102] verifying NodePressure condition ...
	I0130 19:26:18.604228   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:18.604257   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:18.604271   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:18.605709   12673 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0130 19:26:18.604511   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:18.604532   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:18.607524   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:18.607542   12673 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0130 19:26:18.607552   12673 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-444600"
	I0130 19:26:18.607554   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0130 19:26:18.609237   12673 out.go:177] * Verifying csi-hostpath-driver addon...
	I0130 19:26:18.611574   12673 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0130 19:26:18.627887   12673 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 19:26:18.627910   12673 node_conditions.go:123] node cpu capacity is 2
	I0130 19:26:18.627920   12673 node_conditions.go:105] duration metric: took 23.687118ms to run NodePressure ...
	I0130 19:26:18.627930   12673 start.go:228] waiting for startup goroutines ...
	I0130 19:26:18.668326   12673 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0130 19:26:18.668351   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:18.703806   12673 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0130 19:26:18.703834   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0130 19:26:18.851827   12673 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 19:26:18.851850   12673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0130 19:26:18.885611   12673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 19:26:19.107017   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:19.110892   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:19.122663   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:19.389228   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.18154382s)
	I0130 19:26:19.389276   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:19.389293   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:19.389607   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:19.389629   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:19.389639   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:19.389649   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:19.389892   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:19.389912   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:19.389918   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:19.576266   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:19.580928   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:19.624419   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:20.076666   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:20.080477   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:20.118154   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:20.554900   12673 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.669248506s)
	I0130 19:26:20.554951   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:20.554969   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:20.555307   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:20.555332   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:20.555339   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:20.555348   12673 main.go:141] libmachine: Making call to close driver server
	I0130 19:26:20.555359   12673 main.go:141] libmachine: (addons-444600) Calling .Close
	I0130 19:26:20.555573   12673 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:26:20.555601   12673 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:26:20.555615   12673 main.go:141] libmachine: (addons-444600) DBG | Closing plugin on server side
	I0130 19:26:20.557841   12673 addons.go:470] Verifying addon gcp-auth=true in "addons-444600"
	I0130 19:26:20.559898   12673 out.go:177] * Verifying gcp-auth addon...
	I0130 19:26:20.562595   12673 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0130 19:26:20.577857   12673 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0130 19:26:20.577882   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:20.594907   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:20.602613   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:20.624560   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:21.068041   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:21.076784   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:21.081350   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:21.118753   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:21.567183   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:21.575814   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:21.581097   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:21.618182   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:22.067601   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:22.077040   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:22.080960   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:22.117680   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:22.566530   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:22.583854   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:22.586256   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:22.617510   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:23.067673   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:23.076628   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:23.080667   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:23.117411   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:23.566543   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:23.577620   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:23.580559   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:23.617060   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:24.070809   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:24.076003   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:24.082090   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:24.118208   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:24.750952   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:24.751077   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:24.751905   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:24.754563   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:25.066543   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:25.076714   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:25.081064   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:25.122837   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:25.566809   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:25.575817   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:25.581482   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:25.617628   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:26.067798   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:26.076851   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:26.087549   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:26.117308   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:26.567584   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:26.578173   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:26.581357   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:26.617722   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:27.067730   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:27.077092   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:27.084990   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:27.117881   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:27.567238   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:27.583734   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:27.584304   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:27.618313   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:28.067944   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:28.079340   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:28.081722   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:28.120235   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:28.567969   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:28.577243   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:28.583898   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:28.617874   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:29.066749   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:29.077672   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:29.081671   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:29.117243   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:29.567107   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:29.576415   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:29.581834   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:29.618350   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:30.067346   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:30.077132   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:30.081295   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:30.127124   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:30.568551   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:30.577663   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:30.582472   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:30.619313   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:31.066105   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:31.075795   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:31.081027   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:31.117539   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:31.566106   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:31.576205   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:31.583164   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:31.616836   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:32.066540   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:32.076533   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:32.080986   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:32.131794   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:32.567776   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:32.576003   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:32.583844   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:32.617835   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:33.067126   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:33.078324   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:33.081336   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:33.118787   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:33.567770   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:33.577953   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:33.580610   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:33.642886   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:34.242920   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:34.243237   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:34.244047   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:34.244249   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:34.566962   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:34.576574   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:34.581255   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:34.617659   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:35.067006   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:35.080053   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:35.083160   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:35.117401   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:35.567992   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:35.577126   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:35.591404   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:35.617458   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:36.067890   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:36.082442   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:36.087747   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:36.117745   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:36.759721   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:36.760045   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:36.760229   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:36.763052   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:37.067281   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:37.076250   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:37.081553   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:37.116957   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:37.566459   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:37.576505   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:37.581360   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:37.618081   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:38.067671   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:38.077025   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:38.080632   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:38.118402   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:38.566191   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:38.581498   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:38.582182   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:38.617914   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:39.066633   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:39.077548   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:39.080842   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:39.117583   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:39.567185   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:39.576161   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:39.583093   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:39.617604   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:40.066903   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:40.075936   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:40.081599   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:40.117400   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:40.566750   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:40.576138   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:40.581757   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:40.617388   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:41.068528   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:41.077308   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:41.081380   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:41.121767   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:41.567094   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:41.576032   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:41.581220   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:41.616768   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:42.067818   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:42.076360   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:42.081600   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:42.121232   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:42.566669   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:42.576267   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:42.580503   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:42.618046   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:43.071682   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:43.089208   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:43.089752   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:43.117376   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:43.566784   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:43.577046   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:43.581407   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:43.617863   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:44.066911   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:44.075968   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:44.081006   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:44.118287   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:44.567345   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:44.576555   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:44.583448   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:44.634413   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:45.069167   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:45.076424   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:45.085064   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:45.122128   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:45.566270   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:45.581407   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:45.587860   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:45.618733   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:46.074361   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:46.089650   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:46.094428   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:46.128255   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:46.567119   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:46.583403   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:46.583534   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:46.619065   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:47.068318   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:47.088403   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:47.089009   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:47.117648   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:47.567399   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:47.577409   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:47.580777   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:47.618013   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:48.067459   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:48.078677   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:48.082105   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:48.118352   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:48.566377   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:48.577547   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:48.581187   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:48.618300   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:49.066643   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:49.076976   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:49.081245   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:49.119084   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:49.567556   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:49.577598   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:49.580961   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:49.618049   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:50.398024   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:50.399083   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:50.399107   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:50.400473   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:50.566492   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:50.576931   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:50.580601   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:50.618442   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:51.066826   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:51.077539   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:51.080988   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:51.117750   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:51.567295   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:51.576491   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:51.580837   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:51.618149   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:52.067431   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:52.076856   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:52.081204   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:52.118940   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:52.569739   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:52.581290   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:52.582687   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:52.617749   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:53.067748   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:53.077226   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:53.080852   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:53.117649   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:53.566685   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:53.575930   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:53.584071   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:53.618082   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:54.066862   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:54.076918   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:54.080695   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:54.118629   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:54.567096   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:54.576597   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:54.581317   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:54.617090   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:55.066774   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:55.077885   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:55.084414   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:55.126595   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:55.567088   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:55.576525   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:55.580649   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:55.617526   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:56.067555   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:56.076926   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:56.080528   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:56.121999   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:56.566153   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:56.580951   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:56.582569   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:56.618161   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:57.074470   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:57.076610   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:57.080810   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:57.117633   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:57.566801   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:57.576068   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:57.581537   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:57.617571   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:58.067915   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:58.077011   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:58.080952   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:58.117625   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:58.566978   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:58.576111   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:58.581350   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:58.617387   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:59.066737   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:59.076128   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:59.081205   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:59.118785   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:26:59.566265   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:26:59.578642   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:26:59.582863   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:26:59.617780   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:00.066794   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:00.075979   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:00.081601   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:00.119402   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:00.567452   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:00.576865   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:00.581054   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:00.618017   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:01.066745   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:01.075728   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:01.080913   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:01.120110   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:01.566498   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:01.577111   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:01.580806   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:01.617935   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:02.067839   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:02.077244   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:02.082836   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:02.118395   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:02.567037   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:02.576190   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:02.583330   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:02.619105   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:03.067417   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:03.076807   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:03.085474   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:03.120057   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:03.566931   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:03.575626   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:03.580850   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:03.618100   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:04.069722   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:04.075836   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:04.080827   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:04.117855   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:04.566708   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:04.576945   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:04.580925   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:04.617846   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:05.067523   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:05.076654   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:05.083215   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:05.119211   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:05.566286   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:05.577693   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:05.580770   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:05.617914   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:06.067596   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:06.076610   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:06.081469   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:06.117481   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:06.567179   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:06.576932   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:06.581189   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:06.617201   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:07.069453   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:07.081416   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:07.082965   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:07.119030   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:07.566696   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:07.577486   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:07.580908   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:07.621100   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:08.066836   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:08.077422   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:08.083725   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:08.118105   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:08.567485   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:08.576984   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:08.581203   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:08.617269   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:09.067268   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:09.077215   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:09.082538   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:09.117310   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:09.567402   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:09.577593   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:09.580746   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:09.617909   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:10.066738   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:10.075849   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:10.081401   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:10.118861   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:10.567839   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:10.577969   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:10.582095   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:10.619965   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:11.067266   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:11.076533   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:11.080725   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:11.117597   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:11.567599   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:11.577057   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:11.581259   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:11.618564   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:12.066554   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:12.076824   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:12.081333   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:12.122026   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:12.568521   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:12.579191   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:12.582706   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:12.617191   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:13.279787   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:13.280985   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:13.283481   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:13.284409   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:13.567176   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:13.577193   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:13.586855   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:13.618241   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:14.066309   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:14.076484   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:14.081267   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:14.122833   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:14.567311   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:14.589068   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:14.591558   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:14.622114   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:15.066831   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:15.075803   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:15.080691   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:15.116742   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:15.567288   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:15.576614   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:15.580666   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:15.617937   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:16.067792   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:16.076211   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:16.082686   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:16.125443   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:16.566789   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:16.579322   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:16.582986   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:16.619834   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:17.068970   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:17.080329   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:17.085781   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:17.118218   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:17.567170   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:17.577512   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:17.580870   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:17.631485   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:18.066527   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:18.077257   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:18.080603   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:18.116983   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:18.567268   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:18.576623   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:18.583230   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:18.621944   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:19.066977   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:19.081949   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:19.082808   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:19.118372   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:19.566485   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:19.579217   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:19.582935   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:19.617984   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:20.065624   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:20.076321   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:20.080580   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:20.117147   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:20.566892   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:20.577278   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:20.582689   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:20.617212   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:21.067126   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:21.076777   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:21.081806   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:21.119373   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:21.566717   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:21.577018   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:21.580430   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:21.616603   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:22.068476   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:22.077012   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:22.081364   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:22.117353   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:22.566486   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:22.576382   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:22.581896   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:22.620485   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:23.067538   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:23.078546   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:23.081712   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:23.126774   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:23.584728   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:23.585559   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:23.586419   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:23.617669   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:24.066552   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:24.076989   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:24.080313   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:24.116705   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:24.566238   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:24.576696   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:24.580748   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:24.617898   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:25.067410   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:25.076206   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:25.080996   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:25.120141   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:25.566836   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:25.580723   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:25.582467   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:25.617492   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:26.067655   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:26.075786   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:26.082725   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:26.119241   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:26.566492   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:26.576319   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:26.580226   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:26.617234   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:27.071678   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:27.076382   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:27.081431   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:27.118888   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:27.569482   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:27.577031   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:27.580902   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:27.617380   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:28.066887   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:28.081053   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:28.084066   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:28.118512   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:28.566001   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:28.576120   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:28.581440   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:28.617041   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:29.068098   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:29.076021   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:29.081556   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:29.117742   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:29.567448   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:29.576850   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:29.582304   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:29.618136   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:30.067032   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:30.077779   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:30.081433   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:30.117660   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:30.567096   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:30.577187   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:30.581419   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:30.618982   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:31.068430   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:31.084878   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:31.086465   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 19:27:31.117412   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:31.566791   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:31.577089   12673 kapi.go:107] duration metric: took 1m15.506480337s to wait for kubernetes.io/minikube-addons=registry ...
	I0130 19:27:31.580729   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:31.617770   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:32.067138   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:32.082106   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:32.120214   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:32.570529   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:32.582467   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:32.619185   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:33.067171   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:33.081810   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:33.117578   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:33.568197   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:33.592037   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:33.623051   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:34.068682   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:34.081985   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:34.118196   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:34.574394   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:34.582696   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:34.621643   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:35.066867   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:35.082948   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:35.117745   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:35.567358   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:35.583480   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:35.618562   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:36.067759   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:36.082718   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:36.118251   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:36.566207   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:36.581256   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:36.616856   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:37.067639   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:37.082510   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:37.117272   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:37.567712   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:37.582699   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:37.639567   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:38.067162   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:38.080922   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:38.118303   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:38.572492   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:38.581414   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:38.619258   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:39.077721   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:39.084036   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:39.124036   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:39.567325   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:39.582710   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:39.617958   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:40.067634   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:40.081690   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:40.118319   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:40.566386   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:40.583434   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:40.617765   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:41.067278   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:41.081630   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:41.118949   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 19:27:41.566850   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:41.581619   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:41.617208   12673 kapi.go:107] duration metric: took 1m23.005635507s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0130 19:27:42.067595   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:42.084327   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:42.567172   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:42.581210   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:43.067046   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:43.082859   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:43.567485   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:43.581233   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:44.066856   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:44.082061   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:44.567726   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:44.581611   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:45.067640   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:45.084533   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:45.569191   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:45.581083   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:46.067146   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:46.081094   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:46.567135   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:46.581515   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:47.068444   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:47.081640   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:47.567855   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:47.582202   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:48.066866   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:48.081888   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:48.567363   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:48.581642   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:49.067258   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:49.082278   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:49.566872   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:49.584518   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:50.066855   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:50.082085   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:50.567560   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:50.582345   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:51.066747   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:51.082160   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:51.658234   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:51.659193   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:52.069033   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:52.084066   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:52.566668   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:52.581686   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:53.068687   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:53.082173   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:53.566586   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:53.582191   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:54.066578   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:54.083192   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:54.566794   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:54.582188   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:55.067084   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:55.082000   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:55.566783   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:55.581359   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:56.067047   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:56.082219   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:56.567384   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:56.581793   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:57.067091   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:57.082234   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:57.566748   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:57.582003   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:58.066202   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:58.083214   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:58.567249   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:58.582323   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:59.067071   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:59.082154   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:27:59.566770   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:27:59.582072   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:00.066544   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:00.081967   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:00.566881   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:00.582601   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:01.067605   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:01.081492   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:01.567185   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:01.580936   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:02.068582   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:02.085037   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:02.567411   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:02.581352   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:03.066966   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:03.081706   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:03.567257   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:03.581520   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:04.067708   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:04.081656   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:04.567447   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:04.582046   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:05.066562   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:05.081800   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:05.567647   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:05.582436   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:06.066517   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:06.081180   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:06.567116   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:06.582516   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:07.070875   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:07.082038   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:07.566952   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:07.582312   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:08.067993   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:08.082658   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:08.566599   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:08.581356   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:09.066286   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:09.081416   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:09.567422   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:09.581739   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:10.066514   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:10.081856   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:10.568205   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:10.581410   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:11.067433   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:11.081196   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:11.567315   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:11.581630   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:12.067004   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:12.083823   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:12.567226   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:12.581500   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:13.067525   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:13.081489   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:13.567425   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:13.582291   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:14.066661   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:14.081900   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:14.567816   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:14.582201   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:15.066861   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:15.082777   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:15.569056   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:15.581982   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:16.066570   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:16.081538   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:16.567442   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:16.581407   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:17.068255   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:17.081256   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:17.566600   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:17.581697   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:18.068050   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:18.081840   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:18.567528   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:18.582740   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:19.067537   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:19.081364   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:19.567272   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:19.581469   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:20.066408   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:20.080970   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:20.567439   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:20.581110   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:21.066705   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:21.081706   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:21.567803   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:21.581892   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:22.066778   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:22.081973   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:22.566695   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:22.582815   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:23.066486   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:23.081543   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:23.567072   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:23.586065   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:24.066447   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:24.082224   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:24.567531   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:24.581612   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:25.067556   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:25.081827   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:25.567177   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:25.581127   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:26.066925   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:26.083638   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:26.566528   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:26.581377   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:27.067823   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:27.081762   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:27.567786   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:27.582270   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:28.067996   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:28.082326   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:28.567238   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:28.581679   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:29.067124   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:29.082853   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:29.566752   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:29.581965   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:30.069603   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:30.081315   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:30.567170   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:30.582204   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:31.066613   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:31.083744   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:31.566898   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:31.582048   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:32.065887   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:32.082165   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:32.566755   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:32.581801   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:33.067019   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:33.084882   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:33.567391   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:33.581720   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:34.068842   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:34.085832   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:34.568212   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:34.581476   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:35.066596   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:35.081594   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:35.567110   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:35.581629   12673 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 19:28:36.067605   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:36.081846   12673 kapi.go:107] duration metric: took 2m20.004977659s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0130 19:28:36.567189   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:37.068949   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:37.567941   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:38.066978   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:38.567818   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:39.066856   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:39.567656   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:40.066247   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:40.567394   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:41.067828   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:41.567666   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:42.066320   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:42.567288   12673 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 19:28:43.067300   12673 kapi.go:107] duration metric: took 2m22.504705156s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0130 19:28:43.069395   12673 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-444600 cluster.
	I0130 19:28:43.070999   12673 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0130 19:28:43.072519   12673 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0130 19:28:43.073950   12673 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, helm-tiller, inspektor-gadget, cloud-spanner, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0130 19:28:43.075373   12673 addons.go:505] enable addons completed in 2m39.020130234s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner storage-provisioner-rancher metrics-server helm-tiller inspektor-gadget cloud-spanner yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0130 19:28:43.075410   12673 start.go:233] waiting for cluster config update ...
	I0130 19:28:43.075435   12673 start.go:242] writing updated cluster config ...
	I0130 19:28:43.075666   12673 ssh_runner.go:195] Run: rm -f paused
	I0130 19:28:43.128537   12673 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 19:28:43.130100   12673 out.go:177] * Done! kubectl is now configured to use "addons-444600" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	e92ae59ea7978       3f57d9401f8d4       3 seconds ago        Exited              busybox                                  0                   857adbe1f5ba0       test-local-path
	21f2a96e7d0f3       6d2a98b274382       14 seconds ago       Running             gcp-auth                                 0                   c6cdc590dca03       gcp-auth-d4c87556c-q4ntb
	e6473d02cbe98       311f90a3747fd       21 seconds ago       Running             controller                               0                   f4fe6fbdc76cc       ingress-nginx-controller-69cff4fd79-zrxdd
	836cfe9ce7d1e       6649c59aa66d0       35 seconds ago       Exited              gadget                                   3                   a9d39c3c7f910       gadget-d8rvs
	9a90e5bd72522       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	498f33adbd3c8       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	59e97dc35c722       e899260153aed       About a minute ago   Running             liveness-probe                           0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	b6a06046fd68d       e255e073c508c       About a minute ago   Running             hostpath                                 0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	18a60975511de       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	ec825b4b3565a       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   79f0e04388f4e       csi-hostpathplugin-h6w5m
	420752d17bf8d       1ebff0f9671bc       About a minute ago   Exited              patch                                    0                   1adeb1c174a13       ingress-nginx-admission-patch-dgs9l
	71a7cdfcc920b       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   27952618061b2       csi-hostpath-resizer-0
	2dfe227e20f52       d2fd211e7dcaa       About a minute ago   Running             registry-proxy                           0                   204717e0a4096       registry-proxy-bsjjs
	3c3611954a573       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   54fb700e73287       csi-hostpath-attacher-0
	48eb796194dcb       1ebff0f9671bc       About a minute ago   Exited              create                                   0                   3b1d939a6089b       ingress-nginx-admission-create-gc5z2
	df2fe23142561       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   ed252e3515d3f       local-path-provisioner-78b46b4d5c-8jtb4
	be765bcef4e55       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   ad0946fa90496       snapshot-controller-58dbcc7b99-9n26q
	b59fb1cc2664b       31de47c733c91       About a minute ago   Running             yakd                                     0                   3c8c4d94386e2       yakd-dashboard-9947fc6bf-5s9nq
	8cd612a793207       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   818236c0f5872       snapshot-controller-58dbcc7b99-xsmhd
	39dbd9946a21d       3f39089e90831       2 minutes ago        Running             tiller                                   0                   99b8e75638f2c       tiller-deploy-7b677967b9-kkzmv
	473f1910dba78       909c3ff012b7f       2 minutes ago        Running             registry                                 0                   64507ff96fb0a       registry-x98sf
	e56a231ac6899       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   1bf52facc5c82       kube-ingress-dns-minikube
	4cc468232d184       8cfc3f994a82b       2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   804beef057614       nvidia-device-plugin-daemonset-2tfw7
	54277e83cf20f       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   52a9081745d37       storage-provisioner
	ad1498343e0ae       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   078a73d6af3f7       coredns-5dd5756b68-p8k4q
	054d8faeaed43       83f6cc407eed8       2 minutes ago        Running             kube-proxy                               0                   cf05b98349f85       kube-proxy-krwt6
	61d073cfb2c73       73deb9a3f7025       3 minutes ago        Running             etcd                                     0                   a6ec3f4d7ce8d       etcd-addons-444600
	4985d47af422b       d058aa5ab969c       3 minutes ago        Running             kube-controller-manager                  0                   149c1e0f38282       kube-controller-manager-addons-444600
	9ca0ebe50e27f       e3db313c6dbc0       3 minutes ago        Running             kube-scheduler                           0                   354152e777299       kube-scheduler-addons-444600
	3d7508efea742       7fe0e6f37db33       3 minutes ago        Running             kube-apiserver                           0                   bce6565acc5ba       kube-apiserver-addons-444600
	
	
	==> containerd <==
	-- Journal begins at Tue 2024-01-30 19:25:16 UTC, ends at Tue 2024-01-30 19:28:56 UTC. --
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.839139000Z" level=warning msg="cleanup warnings time=\"2024-01-30T19:28:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.850551907Z" level=info msg="StopContainer for \"ec405f12e2b0e27a0915cda16d8d70df4e76807b51bc9916f6fb93ee7c8c7c23\" returns successfully"
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.851378961Z" level=info msg="StopPodSandbox for \"b024573540b3cf997eb71e3b4f387607136d8efd79c61ec0d10371c3b8089827\""
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.851877613Z" level=info msg="Container to stop \"ec405f12e2b0e27a0915cda16d8d70df4e76807b51bc9916f6fb93ee7c8c7c23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.939721922Z" level=info msg="StopPodSandbox for \"857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4\""
	Jan 30 19:28:54 addons-444600 containerd[686]: time="2024-01-30T19:28:54.941222133Z" level=info msg="Container to stop \"e92ae59ea79781640175bf39fa95aae337ca21a88feef2263805aa4a862e3a89\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.048242430Z" level=info msg="shim disconnected" id=b024573540b3cf997eb71e3b4f387607136d8efd79c61ec0d10371c3b8089827 namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.048324264Z" level=warning msg="cleaning up after shim disconnected" id=b024573540b3cf997eb71e3b4f387607136d8efd79c61ec0d10371c3b8089827 namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.048336816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.106303154Z" level=info msg="shim disconnected" id=857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4 namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.106392193Z" level=warning msg="cleaning up after shim disconnected" id=857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4 namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.106407433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.199839167Z" level=info msg="TearDown network for sandbox \"b024573540b3cf997eb71e3b4f387607136d8efd79c61ec0d10371c3b8089827\" successfully"
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.199956466Z" level=info msg="StopPodSandbox for \"b024573540b3cf997eb71e3b4f387607136d8efd79c61ec0d10371c3b8089827\" returns successfully"
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.294966745Z" level=info msg="TearDown network for sandbox \"857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4\" successfully"
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.295033673Z" level=info msg="StopPodSandbox for \"857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4\" returns successfully"
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.506251598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:registry-test,Uid:796163b8-5b10-48ce-a033-8319543ac154,Namespace:default,Attempt:0,} returns sandbox id \"1c1de462c31d946a41553fa458875b429b20531e455b5a6deab0e6f705c15d96\""
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.517993055Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:latest\""
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.951022920Z" level=info msg="RemoveContainer for \"ec405f12e2b0e27a0915cda16d8d70df4e76807b51bc9916f6fb93ee7c8c7c23\""
	Jan 30 19:28:55 addons-444600 containerd[686]: time="2024-01-30T19:28:55.971521424Z" level=info msg="RemoveContainer for \"ec405f12e2b0e27a0915cda16d8d70df4e76807b51bc9916f6fb93ee7c8c7c23\" returns successfully"
	Jan 30 19:28:56 addons-444600 containerd[686]: time="2024-01-30T19:28:56.372980040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353,Uid:02df0124-e892-4751-b535-fd36aebb5505,Namespace:local-path-storage,Attempt:0,}"
	Jan 30 19:28:56 addons-444600 containerd[686]: time="2024-01-30T19:28:56.526306881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 30 19:28:56 addons-444600 containerd[686]: time="2024-01-30T19:28:56.527686501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 30 19:28:56 addons-444600 containerd[686]: time="2024-01-30T19:28:56.529552745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 30 19:28:56 addons-444600 containerd[686]: time="2024-01-30T19:28:56.529680792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> coredns [ad1498343e0aecdacda0e319f44979743e9c77af6e33c981042672efd8143cbe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55193 - 44474 "HINFO IN 1975636701465930788.2922676970776055564. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013740204s
	[INFO] 10.244.0.21:49506 - 33465 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000480873s
	[INFO] 10.244.0.21:34852 - 42907 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106387s
	[INFO] 10.244.0.21:57319 - 6844 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103876s
	[INFO] 10.244.0.21:35915 - 15763 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168456s
	[INFO] 10.244.0.21:58819 - 65520 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007055s
	[INFO] 10.244.0.21:50866 - 3321 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057648s
	[INFO] 10.244.0.21:43839 - 34019 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000598176s
	[INFO] 10.244.0.21:43338 - 56244 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.000463641s
	
	
	==> describe nodes <==
	Name:               addons-444600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-444600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=addons-444600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T19_25_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-444600
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-444600"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 19:25:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-444600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 19:28:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 19:28:55 +0000   Tue, 30 Jan 2024 19:25:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 19:28:55 +0000   Tue, 30 Jan 2024 19:25:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 19:28:55 +0000   Tue, 30 Jan 2024 19:25:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 19:28:55 +0000   Tue, 30 Jan 2024 19:25:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    addons-444600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 80b84a7d69844dd3a2f0f7d05cbf7b4e
	  System UUID:                80b84a7d-6984-4dd3-a2f0-f7d05cbf7b4e
	  Boot ID:                    37ecaf29-a82c-4ba0-9070-19249081f2b6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  gadget                      gadget-d8rvs                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gcp-auth                    gcp-auth-d4c87556c-q4ntb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-zrxdd                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m42s
	  kube-system                 coredns-5dd5756b68-p8k4q                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m53s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 csi-hostpathplugin-h6w5m                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 etcd-addons-444600                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m5s
	  kube-system                 kube-apiserver-addons-444600                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-controller-manager-addons-444600                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-proxy-krwt6                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-scheduler-addons-444600                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 nvidia-device-plugin-daemonset-2tfw7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 registry-proxy-bsjjs                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 registry-x98sf                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 snapshot-controller-58dbcc7b99-9n26q                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 snapshot-controller-58dbcc7b99-xsmhd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 tiller-deploy-7b677967b9-kkzmv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  local-path-storage          helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-8jtb4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-5s9nq                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m51s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m14s (x8 over 3m15s)  kubelet          Node addons-444600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x8 over 3m15s)  kubelet          Node addons-444600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x7 over 3m15s)  kubelet          Node addons-444600 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m6s                   kubelet          Node addons-444600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s                   kubelet          Node addons-444600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s                   kubelet          Node addons-444600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node addons-444600 status is now: NodeReady
	  Normal  RegisteredNode           2m54s                  node-controller  Node addons-444600 event: Registered Node addons-444600 in Controller
	
	
	==> dmesg <==
	[  +4.502813] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.392149] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148456] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.997144] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.080812] systemd-fstab-generator[553]: Ignoring "noauto" for root device
	[  +0.107305] systemd-fstab-generator[564]: Ignoring "noauto" for root device
	[  +0.157757] systemd-fstab-generator[577]: Ignoring "noauto" for root device
	[  +0.117131] systemd-fstab-generator[588]: Ignoring "noauto" for root device
	[  +0.232810] systemd-fstab-generator[615]: Ignoring "noauto" for root device
	[  +6.487424] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +4.987855] systemd-fstab-generator[839]: Ignoring "noauto" for root device
	[  +9.238559] systemd-fstab-generator[1197]: Ignoring "noauto" for root device
	[Jan30 19:26] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.079226] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.079227] kauditd_printk_skb: 23 callbacks suppressed
	[ +12.852393] kauditd_printk_skb: 6 callbacks suppressed
	[Jan30 19:27] kauditd_printk_skb: 22 callbacks suppressed
	[ +19.874564] kauditd_printk_skb: 32 callbacks suppressed
	[Jan30 19:28] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.031536] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.173028] kauditd_printk_skb: 3 callbacks suppressed
	[ +18.768058] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [61d073cfb2c73b712bbfc8da7e5f7fd579c4568bf965473097bdd6a35a9aabbd] <==
	{"level":"info","ts":"2024-01-30T19:26:50.391081Z","caller":"traceutil/trace.go:171","msg":"trace[1976848032] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:943; }","duration":"319.547037ms","start":"2024-01-30T19:26:50.071529Z","end":"2024-01-30T19:26:50.391076Z","steps":["trace[1976848032] 'agreement among raft nodes before linearized reading'  (duration: 319.428687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:26:50.391085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.261479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81700"}
	{"level":"warn","ts":"2024-01-30T19:26:50.391095Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T19:26:50.071516Z","time spent":"319.575395ms","remote":"127.0.0.1:34754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81723,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-01-30T19:26:50.391101Z","caller":"traceutil/trace.go:171","msg":"trace[1097107751] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:943; }","duration":"279.277824ms","start":"2024-01-30T19:26:50.111819Z","end":"2024-01-30T19:26:50.391096Z","steps":["trace[1097107751] 'agreement among raft nodes before linearized reading'  (duration: 279.176187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:26:50.390816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.677542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T19:26:50.391146Z","caller":"traceutil/trace.go:171","msg":"trace[989156375] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:943; }","duration":"146.002488ms","start":"2024-01-30T19:26:50.245134Z","end":"2024-01-30T19:26:50.391136Z","steps":["trace[989156375] 'agreement among raft nodes before linearized reading'  (duration: 145.666463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:13.272772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.153107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-30T19:27:13.272866Z","caller":"traceutil/trace.go:171","msg":"trace[1455873628] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:976; }","duration":"209.257114ms","start":"2024-01-30T19:27:13.063595Z","end":"2024-01-30T19:27:13.272852Z","steps":["trace[1455873628] 'range keys from in-memory index tree'  (duration: 209.026109ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:13.27333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.914632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81700"}
	{"level":"info","ts":"2024-01-30T19:27:13.273401Z","caller":"traceutil/trace.go:171","msg":"trace[642149910] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:976; }","duration":"202.004123ms","start":"2024-01-30T19:27:13.071385Z","end":"2024-01-30T19:27:13.273389Z","steps":["trace[642149910] 'range keys from in-memory index tree'  (duration: 201.588212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:13.273619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.786251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-30T19:27:13.273677Z","caller":"traceutil/trace.go:171","msg":"trace[1333535653] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:976; }","duration":"194.847423ms","start":"2024-01-30T19:27:13.07882Z","end":"2024-01-30T19:27:13.273668Z","steps":["trace[1333535653] 'range keys from in-memory index tree'  (duration: 194.693688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:13.273957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.862224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81700"}
	{"level":"info","ts":"2024-01-30T19:27:13.274016Z","caller":"traceutil/trace.go:171","msg":"trace[882358274] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:976; }","duration":"161.964667ms","start":"2024-01-30T19:27:13.112043Z","end":"2024-01-30T19:27:13.274007Z","steps":["trace[882358274] 'range keys from in-memory index tree'  (duration: 161.71079ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:27:18.911504Z","caller":"traceutil/trace.go:171","msg":"trace[1478702369] linearizableReadLoop","detail":"{readStateIndex:1034; appliedIndex:1033; }","duration":"283.33355ms","start":"2024-01-30T19:27:18.628155Z","end":"2024-01-30T19:27:18.911489Z","steps":["trace[1478702369] 'read index received'  (duration: 283.151542ms)","trace[1478702369] 'applied index is now lower than readState.Index'  (duration: 181.557µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T19:27:18.912001Z","caller":"traceutil/trace.go:171","msg":"trace[1221004524] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"284.963847ms","start":"2024-01-30T19:27:18.627023Z","end":"2024-01-30T19:27:18.911987Z","steps":["trace[1221004524] 'process raft request'  (duration: 284.33439ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:18.912017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.788784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T19:27:18.912326Z","caller":"traceutil/trace.go:171","msg":"trace[508971819] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1000; }","duration":"284.178709ms","start":"2024-01-30T19:27:18.628139Z","end":"2024-01-30T19:27:18.912317Z","steps":["trace[508971819] 'agreement among raft nodes before linearized reading'  (duration: 283.539219ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T19:27:23.568892Z","caller":"traceutil/trace.go:171","msg":"trace[1694747818] transaction","detail":"{read_only:false; response_revision:1029; number_of_response:1; }","duration":"351.466624ms","start":"2024-01-30T19:27:23.217401Z","end":"2024-01-30T19:27:23.568868Z","steps":["trace[1694747818] 'process raft request'  (duration: 349.411405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:23.569055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T19:27:23.217386Z","time spent":"351.586497ms","remote":"127.0.0.1:34754","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3976,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gc5z2\" mod_revision:728 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gc5z2\" value_size:3903 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gc5z2\" > >"}
	{"level":"info","ts":"2024-01-30T19:27:23.579326Z","caller":"traceutil/trace.go:171","msg":"trace[1793735010] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"310.347313ms","start":"2024-01-30T19:27:23.268962Z","end":"2024-01-30T19:27:23.579309Z","steps":["trace[1793735010] 'process raft request'  (duration: 309.96886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:27:23.579418Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T19:27:23.268945Z","time spent":"310.423467ms","remote":"127.0.0.1:34752","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5676,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-444600\" mod_revision:949 > success:<request_put:<key:\"/registry/minions/addons-444600\" value_size:5637 >> failure:<request_range:<key:\"/registry/minions/addons-444600\" > >"}
	{"level":"info","ts":"2024-01-30T19:28:39.531456Z","caller":"traceutil/trace.go:171","msg":"trace[976408441] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"117.555155ms","start":"2024-01-30T19:28:39.413876Z","end":"2024-01-30T19:28:39.531431Z","steps":["trace[976408441] 'process raft request'  (duration: 117.446065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T19:28:46.997559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.847294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-01-30T19:28:46.997653Z","caller":"traceutil/trace.go:171","msg":"trace[191052953] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-f4c838b1-fb83-40e9-a104-024de96ac353; range_end:; response_count:1; response_revision:1315; }","duration":"112.068794ms","start":"2024-01-30T19:28:46.885567Z","end":"2024-01-30T19:28:46.997636Z","steps":["trace[191052953] 'range keys from in-memory index tree'  (duration: 111.714094ms)"],"step_count":1}
	
	
	==> gcp-auth [21f2a96e7d0f363d7e48a0228ba461bdf1943adf075a5f705675241c207c8b95] <==
	2024/01/30 19:28:42 GCP Auth Webhook started!
	2024/01/30 19:28:43 Ready to marshal response ...
	2024/01/30 19:28:43 Ready to write response ...
	2024/01/30 19:28:43 Ready to marshal response ...
	2024/01/30 19:28:43 Ready to write response ...
	2024/01/30 19:28:54 Ready to marshal response ...
	2024/01/30 19:28:54 Ready to write response ...
	2024/01/30 19:28:56 Ready to marshal response ...
	2024/01/30 19:28:56 Ready to write response ...
	
	
	==> kernel <==
	 19:28:57 up 3 min,  0 users,  load average: 1.03, 1.08, 0.48
	Linux addons-444600 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3d7508efea742f0ae0fdb37cb76e6e0112233e83a81fac0d6ad480246a55652b] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 19:26:18.222815       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.102.221.210"}
	I0130 19:26:18.255816       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0130 19:26:18.468911       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.0.8"}
	W0130 19:26:19.413416       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 19:26:20.357653       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.128.12"}
	I0130 19:26:48.283843       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 19:27:13.346233       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 19:27:13.346356       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 19:27:13.346365       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 19:27:13.347629       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 19:27:13.347723       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 19:27:13.347758       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0130 19:27:15.209515       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.201.234:443: connect: connection refused
	W0130 19:27:15.209521       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 19:27:15.210387       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0130 19:27:15.210990       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.201.234:443: connect: connection refused
	I0130 19:27:15.211005       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0130 19:27:15.215832       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.201.234:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.201.234:443: connect: connection refused
	I0130 19:27:15.300330       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 19:27:48.288647       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 19:28:48.289085       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [4985d47af422b9b72623b0d30a584365872c779cd35a9b2040c8d91f34aa529c] <==
	I0130 19:27:36.799587       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0130 19:27:36.799695       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0130 19:27:36.827274       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:27:37.484104       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:27:37.497464       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:27:37.506842       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:27:37.507394       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0130 19:27:37.567729       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:28:06.022477       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0130 19:28:06.056943       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0130 19:28:07.011039       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:28:07.054627       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0130 19:28:35.871581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="8.46658ms"
	I0130 19:28:42.922542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="14.371317ms"
	I0130 19:28:42.924460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="56.328µs"
	I0130 19:28:43.377262       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0130 19:28:43.396507       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0130 19:28:43.531810       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0130 19:28:48.508390       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0130 19:28:48.508451       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0130 19:28:48.817792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="9.07µs"
	I0130 19:28:49.733694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="15.215418ms"
	I0130 19:28:49.733782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="61.549µs"
	I0130 19:28:54.620552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="6.886µs"
	I0130 19:28:57.096633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="5.121µs"
	
	
	==> kube-proxy [054d8faeaed435069439b44968cd5781d6411766e94e5ad6c6c6549515aa4561] <==
	I0130 19:26:05.505657       1 server_others.go:69] "Using iptables proxy"
	I0130 19:26:05.529612       1 node.go:141] Successfully retrieved node IP: 192.168.39.249
	I0130 19:26:05.793678       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 19:26:05.793699       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 19:26:05.810903       1 server_others.go:152] "Using iptables Proxier"
	I0130 19:26:05.811011       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 19:26:05.811423       1 server.go:846] "Version info" version="v1.28.4"
	I0130 19:26:05.811719       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 19:26:05.821890       1 config.go:188] "Starting service config controller"
	I0130 19:26:05.821942       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 19:26:05.821966       1 config.go:97] "Starting endpoint slice config controller"
	I0130 19:26:05.821970       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 19:26:05.826507       1 config.go:315] "Starting node config controller"
	I0130 19:26:05.826570       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 19:26:05.933785       1 shared_informer.go:318] Caches are synced for service config
	I0130 19:26:05.938252       1 shared_informer.go:318] Caches are synced for node config
	I0130 19:26:05.938483       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9ca0ebe50e27f6c7732cf6999a28c3986871b4f71ac226d9e48fd7871f04f211] <==
	W0130 19:25:48.406635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:48.406643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:48.417514       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 19:25:48.417566       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 19:25:49.228923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 19:25:49.229021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 19:25:49.233582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 19:25:49.233690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 19:25:49.246000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:49.246261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:49.274307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 19:25:49.274674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 19:25:49.387970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 19:25:49.388077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 19:25:49.406356       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 19:25:49.406455       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 19:25:49.456082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 19:25:49.456279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 19:25:49.458628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:49.458687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 19:25:49.473722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 19:25:49.473750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 19:25:49.545463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 19:25:49.545583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0130 19:25:51.666947       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 19:25:16 UTC, ends at Tue 2024-01-30 19:28:57 UTC. --
	Jan 30 19:28:54 addons-444600 kubelet[1204]: I0130 19:28:54.374261    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnmn\" (UniqueName: \"kubernetes.io/projected/796163b8-5b10-48ce-a033-8319543ac154-kube-api-access-vhnmn\") pod \"registry-test\" (UID: \"796163b8-5b10-48ce-a033-8319543ac154\") " pod="default/registry-test"
	Jan 30 19:28:54 addons-444600 kubelet[1204]: I0130 19:28:54.374367    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/796163b8-5b10-48ce-a033-8319543ac154-gcp-creds\") pod \"registry-test\" (UID: \"796163b8-5b10-48ce-a033-8319543ac154\") " pod="default/registry-test"
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.291073    1204 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltd5m\" (UniqueName: \"kubernetes.io/projected/4dbd1494-c837-4f1f-b1a9-38a421d6194b-kube-api-access-ltd5m\") pod \"4dbd1494-c837-4f1f-b1a9-38a421d6194b\" (UID: \"4dbd1494-c837-4f1f-b1a9-38a421d6194b\") "
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.305660    1204 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbd1494-c837-4f1f-b1a9-38a421d6194b-kube-api-access-ltd5m" (OuterVolumeSpecName: "kube-api-access-ltd5m") pod "4dbd1494-c837-4f1f-b1a9-38a421d6194b" (UID: "4dbd1494-c837-4f1f-b1a9-38a421d6194b"). InnerVolumeSpecName "kube-api-access-ltd5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.393257    1204 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdtt\" (UniqueName: \"kubernetes.io/projected/5b497734-4f9c-4c32-9c65-b2466fbae6d3-kube-api-access-qwdtt\") pod \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\" (UID: \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\") "
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.393317    1204 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\") pod \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\" (UID: \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\") "
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.394404    1204 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-gcp-creds\") pod \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\" (UID: \"5b497734-4f9c-4c32-9c65-b2466fbae6d3\") "
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.394519    1204 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ltd5m\" (UniqueName: \"kubernetes.io/projected/4dbd1494-c837-4f1f-b1a9-38a421d6194b-kube-api-access-ltd5m\") on node \"addons-444600\" DevicePath \"\""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.394548    1204 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5b497734-4f9c-4c32-9c65-b2466fbae6d3" (UID: "5b497734-4f9c-4c32-9c65-b2466fbae6d3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.394575    1204 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-pvc-f4c838b1-fb83-40e9-a104-024de96ac353" (OuterVolumeSpecName: "data") pod "5b497734-4f9c-4c32-9c65-b2466fbae6d3" (UID: "5b497734-4f9c-4c32-9c65-b2466fbae6d3"). InnerVolumeSpecName "pvc-f4c838b1-fb83-40e9-a104-024de96ac353". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.402429    1204 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b497734-4f9c-4c32-9c65-b2466fbae6d3-kube-api-access-qwdtt" (OuterVolumeSpecName: "kube-api-access-qwdtt") pod "5b497734-4f9c-4c32-9c65-b2466fbae6d3" (UID: "5b497734-4f9c-4c32-9c65-b2466fbae6d3"). InnerVolumeSpecName "kube-api-access-qwdtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.496281    1204 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-gcp-creds\") on node \"addons-444600\" DevicePath \"\""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.496401    1204 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qwdtt\" (UniqueName: \"kubernetes.io/projected/5b497734-4f9c-4c32-9c65-b2466fbae6d3-kube-api-access-qwdtt\") on node \"addons-444600\" DevicePath \"\""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.496415    1204 reconciler_common.go:300] "Volume detached for volume \"pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" (UniqueName: \"kubernetes.io/host-path/5b497734-4f9c-4c32-9c65-b2466fbae6d3-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\") on node \"addons-444600\" DevicePath \"\""
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.945666    1204 scope.go:117] "RemoveContainer" containerID="ec405f12e2b0e27a0915cda16d8d70df4e76807b51bc9916f6fb93ee7c8c7c23"
	Jan 30 19:28:55 addons-444600 kubelet[1204]: I0130 19:28:55.954382    1204 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857adbe1f5ba0679a77e98424a510aae0f157f9b84f80d8226ab44dc5f2ad6d4"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.062152    1204 topology_manager.go:215] "Topology Admit Handler" podUID="02df0124-e892-4751-b535-fd36aebb5505" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: E0130 19:28:56.062302    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dbd1494-c837-4f1f-b1a9-38a421d6194b" containerName="cloud-spanner-emulator"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: E0130 19:28:56.062317    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b497734-4f9c-4c32-9c65-b2466fbae6d3" containerName="busybox"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.062354    1204 memory_manager.go:346] "RemoveStaleState removing state" podUID="4dbd1494-c837-4f1f-b1a9-38a421d6194b" containerName="cloud-spanner-emulator"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.062362    1204 memory_manager.go:346] "RemoveStaleState removing state" podUID="5b497734-4f9c-4c32-9c65-b2466fbae6d3" containerName="busybox"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.204514    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/02df0124-e892-4751-b535-fd36aebb5505-script\") pod \"helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" (UID: \"02df0124-e892-4751-b535-fd36aebb5505\") " pod="local-path-storage/helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.204841    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/02df0124-e892-4751-b535-fd36aebb5505-gcp-creds\") pod \"helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" (UID: \"02df0124-e892-4751-b535-fd36aebb5505\") " pod="local-path-storage/helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.204957    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/02df0124-e892-4751-b535-fd36aebb5505-data\") pod \"helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" (UID: \"02df0124-e892-4751-b535-fd36aebb5505\") " pod="local-path-storage/helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353"
	Jan 30 19:28:56 addons-444600 kubelet[1204]: I0130 19:28:56.205069    1204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57l54\" (UniqueName: \"kubernetes.io/projected/02df0124-e892-4751-b535-fd36aebb5505-kube-api-access-57l54\") pod \"helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353\" (UID: \"02df0124-e892-4751-b535-fd36aebb5505\") " pod="local-path-storage/helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353"
	
	
	==> storage-provisioner [54277e83cf20f665a11210cf8c50f80275936e52588d401443c12677cebe175d] <==
	I0130 19:26:17.649849       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 19:26:17.701660       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 19:26:17.701708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 19:26:17.747720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 19:26:17.759335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56e67287-8a39-4ad8-a85f-a3d5dfba5780", APIVersion:"v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-444600_4f74baf1-7f45-4f2e-8683-38559617c0c5 became leader
	I0130 19:26:17.760742       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-444600_4f74baf1-7f45-4f2e-8683-38559617c0c5!
	I0130 19:26:17.969064       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-444600_4f74baf1-7f45-4f2e-8683-38559617c0c5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444600 -n addons-444600
helpers_test.go:261: (dbg) Run:  kubectl --context addons-444600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-gc5z2 ingress-nginx-admission-patch-dgs9l helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-444600 describe pod ingress-nginx-admission-create-gc5z2 ingress-nginx-admission-patch-dgs9l helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-444600 describe pod ingress-nginx-admission-create-gc5z2 ingress-nginx-admission-patch-dgs9l helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353: exit status 1 (58.808525ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gc5z2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dgs9l" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-444600 describe pod ingress-nginx-admission-create-gc5z2 ingress-nginx-admission-patch-dgs9l helper-pod-delete-pvc-f4c838b1-fb83-40e9-a104-024de96ac353: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.52s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-395377 /tmp/TestFunctionalserialCacheCmdcacheadd_local1437219464/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache add minikube-local-cache-test:functional-395377
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 cache add minikube-local-cache-test:functional-395377: exit status 10 (909.668905ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/minikube-local-cache-test_functional-395377": write: unable to calculate manifest: blob sha256:bcc856be435f26c441b5a185983b0b79f996c18ff1c77a765da5442c0c268167 not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_ba4d3209579c224679143538a8d01dabecff7221_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-395377". args "out/minikube-linux-amd64 -p functional-395377 cache add minikube-local-cache-test:functional-395377" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache delete minikube-local-cache-test:functional-395377
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 cache delete minikube-local-cache-test:functional-395377: exit status 30 (61.955371ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/minikube-local-cache-test_functional-395377: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_dfe49630ac9fdf4afe54f01c9a333dac37c59f38_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-395377". args "out/minikube-linux-amd64 -p functional-395377 cache delete minikube-local-cache-test:functional-395377" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-395377
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr: exit status 80 (815.202004ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0130 19:35:01.969421   18654 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:01.969705   18654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:01.969715   18654 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:01.969722   18654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:01.969910   18654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:01.970489   18654 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:01.970580   18654 cache.go:107] acquiring lock: {Name:mk2bc7f99dd0ee260f56aab31f16e51ccf64f154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:35:01.970770   18654 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-395377
	I0130 19:35:01.972231   18654 image.go:173] found gcr.io/google-containers/addon-resizer:functional-395377 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-395377 original:gcr.io/google-containers/addon-resizer:functional-395377} opener:0xc00048f7a0 tarballImage:<nil> computed:false id:0xc000b09580 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:35:01.972260   18654 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377
	I0130 19:35:02.715479   18654 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-395377" -> "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377" took 744.907025ms
	I0130 19:35:02.718419   18654 out.go:177] 
	W0130 19:35:02.720124   18654 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 19:35:02.720144   18654 out.go:239] * 
	* 
	W0130 19:35:02.721988   18654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:35:02.723802   18654 out.go:177] 

** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0130 19:35:01.969421   18654 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:01.969705   18654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:01.969715   18654 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:01.969722   18654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:01.969910   18654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:01.970489   18654 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:01.970580   18654 cache.go:107] acquiring lock: {Name:mk2bc7f99dd0ee260f56aab31f16e51ccf64f154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:35:01.970770   18654 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-395377
	I0130 19:35:01.972231   18654 image.go:173] found gcr.io/google-containers/addon-resizer:functional-395377 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-395377 original:gcr.io/google-containers/addon-resizer:functional-395377} opener:0xc00048f7a0 tarballImage:<nil> computed:false id:0xc000b09580 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:35:01.972260   18654 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377
	I0130 19:35:02.715479   18654 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-395377" -> "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377" took 744.907025ms
	I0130 19:35:02.718419   18654 out.go:177] 
	W0130 19:35:02.720124   18654 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 19:35:02.720144   18654 out.go:239] * 
	* 
	W0130 19:35:02.721988   18654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:35:02.723802   18654 out.go:177] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr: exit status 80 (945.645318ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0130 19:35:02.790132   18666 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:02.790254   18666 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:02.790263   18666 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:02.790269   18666 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:02.790443   18666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:02.790989   18666 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:02.791052   18666 cache.go:107] acquiring lock: {Name:mk2bc7f99dd0ee260f56aab31f16e51ccf64f154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:35:02.791131   18666 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-395377
	I0130 19:35:02.792629   18666 image.go:173] found gcr.io/google-containers/addon-resizer:functional-395377 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-395377 original:gcr.io/google-containers/addon-resizer:functional-395377} opener:0xc000030230 tarballImage:<nil> computed:false id:0xc00043a0e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:35:02.792652   18666 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377
	I0130 19:35:03.663191   18666 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-395377" -> "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377" took 872.145363ms
	I0130 19:35:03.665804   18666 out.go:177] 
	W0130 19:35:03.667370   18666 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 19:35:03.667390   18666 out.go:239] * 
	* 
	W0130 19:35:03.669214   18666 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:35:03.670635   18666 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.036687976s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-395377
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 image load --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr: exit status 80 (730.190505ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:35:05.788085   18771 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:05.788213   18771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:05.788221   18771 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:05.788226   18771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:05.788413   18771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:05.789004   18771 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:05.789064   18771 cache.go:107] acquiring lock: {Name:mk2bc7f99dd0ee260f56aab31f16e51ccf64f154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:35:05.789143   18771 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-395377
	I0130 19:35:05.790648   18771 image.go:173] found gcr.io/google-containers/addon-resizer:functional-395377 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-395377 original:gcr.io/google-containers/addon-resizer:functional-395377} opener:0xc00055a000 tarballImage:<nil> computed:false id:0xc000992220 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 19:35:05.790678   18771 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377
	I0130 19:35:06.448681   18771 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-395377" -> "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377" took 659.621437ms
	I0130 19:35:06.451218   18771 out.go:177] 
	W0130 19:35:06.452717   18771 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0130 19:35:06.452738   18771 out.go:239] * 
	W0130 19:35:06.454599   18771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:35:06.455980   18771 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image save gcr.io/google-containers/addon-resizer:functional-395377 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0130 19:35:07.615941   19008 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:07.616298   19008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:07.616311   19008 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:07.616318   19008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:07.616606   19008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:07.617480   19008 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:07.617637   19008 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:07.618184   19008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:35:07.618246   19008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:35:07.633802   19008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0130 19:35:07.634286   19008 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:35:07.634979   19008 main.go:141] libmachine: Using API Version  1
	I0130 19:35:07.635015   19008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:35:07.635383   19008 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:35:07.635601   19008 main.go:141] libmachine: (functional-395377) Calling .GetState
	I0130 19:35:07.637975   19008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:35:07.638023   19008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:35:07.653300   19008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0130 19:35:07.653726   19008 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:35:07.654287   19008 main.go:141] libmachine: Using API Version  1
	I0130 19:35:07.654331   19008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:35:07.654623   19008 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:35:07.654821   19008 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:35:07.655077   19008 ssh_runner.go:195] Run: systemctl --version
	I0130 19:35:07.655110   19008 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
	I0130 19:35:07.658784   19008 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
	I0130 19:35:07.659239   19008 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
	I0130 19:35:07.659309   19008 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
	I0130 19:35:07.659424   19008 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
	I0130 19:35:07.659577   19008 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
	I0130 19:35:07.659751   19008 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
	I0130 19:35:07.659882   19008 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
	I0130 19:35:07.746420   19008 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
	W0130 19:35:07.746471   19008 cache_images.go:254] Failed to load cached images for profile functional-395377. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: no such file or directory
	I0130 19:35:07.746517   19008 cache_images.go:262] succeeded pushing to: 
	I0130 19:35:07.746524   19008 cache_images.go:263] failed pushing to: functional-395377
	I0130 19:35:07.746560   19008 main.go:141] libmachine: Making call to close driver server
	I0130 19:35:07.746582   19008 main.go:141] libmachine: (functional-395377) Calling .Close
	I0130 19:35:07.746854   19008 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:35:07.746871   19008 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
	I0130 19:35:07.746876   19008 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:35:07.746889   19008 main.go:141] libmachine: Making call to close driver server
	I0130 19:35:07.746902   19008 main.go:141] libmachine: (functional-395377) Calling .Close
	I0130 19:35:07.747116   19008 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
	I0130 19:35:07.747143   19008 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:35:07.747156   19008 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-395377
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image save --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 image save --daemon gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr: exit status 80 (684.958493ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 19:35:07.837575   19083 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:35:07.837742   19083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:07.837752   19083 out.go:309] Setting ErrFile to fd 2...
	I0130 19:35:07.837757   19083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:35:07.838068   19083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:35:07.838810   19083 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:07.838849   19083 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-395377"]
	I0130 19:35:07.838986   19083 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:35:07.839374   19083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:35:07.839417   19083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:35:07.855698   19083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0130 19:35:07.856152   19083 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:35:07.856811   19083 main.go:141] libmachine: Using API Version  1
	I0130 19:35:07.856832   19083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:35:07.857186   19083 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:35:07.857540   19083 main.go:141] libmachine: (functional-395377) Calling .GetState
	I0130 19:35:07.859687   19083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:35:07.859737   19083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:35:07.875131   19083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0130 19:35:07.875543   19083 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:35:07.876009   19083 main.go:141] libmachine: Using API Version  1
	I0130 19:35:07.876038   19083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:35:07.876391   19083 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:35:07.876594   19083 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:35:07.876738   19083 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-395377]
	I0130 19:35:07.876852   19083 ssh_runner.go:195] Run: systemctl --version
	I0130 19:35:07.876880   19083 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
	I0130 19:35:07.880512   19083 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
	I0130 19:35:07.880897   19083 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
	I0130 19:35:07.880936   19083 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
	I0130 19:35:07.881125   19083 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
	I0130 19:35:07.881324   19083 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
	I0130 19:35:07.881481   19083 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
	I0130 19:35:07.881631   19083 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
	I0130 19:35:07.974811   19083 containerd.go:252] Checking existence of image with name "gcr.io/google-containers/addon-resizer:functional-395377" and sha ""
	I0130 19:35:07.974876   19083 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0130 19:35:08.448851   19083 cache_images.go:345] SaveImages completed in 572.089412ms
	W0130 19:35:08.448890   19083 cache_images.go:442] Failed to load cached images for profile functional-395377. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-395377 not found
	I0130 19:35:08.448907   19083 cache_images.go:450] succeeded pulling from : 
	I0130 19:35:08.448912   19083 cache_images.go:451] failed pulling from : functional-395377
	I0130 19:35:08.448939   19083 main.go:141] libmachine: Making call to close driver server
	I0130 19:35:08.448954   19083 main.go:141] libmachine: (functional-395377) Calling .Close
	I0130 19:35:08.449224   19083 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:35:08.449243   19083 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:35:08.449242   19083 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
	I0130 19:35:08.449252   19083 main.go:141] libmachine: Making call to close driver server
	I0130 19:35:08.449261   19083 main.go:141] libmachine: (functional-395377) Calling .Close
	I0130 19:35:08.449467   19083 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
	I0130 19:35:08.449517   19083 main.go:141] libmachine: Successfully made call to close driver server
	I0130 19:35:08.449556   19083 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 19:35:08.451578   19083 out.go:177] 
	W0130 19:35:08.453071   19083 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-395377: no such file or directory
	W0130 19:35:08.453091   19083 out.go:239] * 
	* 
	W0130 19:35:08.454897   19083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 19:35:08.456401   19083 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)
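
The GUEST_IMAGE_SAVE exit above means the image-save path tried to open a cached tarball on the host that was never written (the preceding `SaveImages` warning shows the image was not found in the cluster, so nothing was saved to the cache). A minimal sketch of that precondition as a host-side check — the helper name is invented for illustration; only the path is taken from the error text:

```python
import os

def check_cached_tarball(path: str) -> str:
    """Mirror the failure mode: the save step open()s this tarball and
    fails with ENOENT when it was never written to the cache."""
    if os.path.exists(path):
        return "ok"
    return f"cached tarball missing: {path}"

# Path reported in the GUEST_IMAGE_SAVE error above (CI workspace layout).
CACHE_TARBALL = (
    "/home/jenkins/minikube-integration/18007-4431/.minikube/cache/images/"
    "amd64/gcr.io/google-containers/addon-resizer_functional-395377"
)
print(check_cached_tarball(CACHE_TARBALL))
```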

                                                
                                    

Test pass (271/318)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 53.37
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 42.81
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 44.37
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 66.14
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 219.87
38 TestAddons/parallel/Registry 17.78
39 TestAddons/parallel/Ingress 21.29
40 TestAddons/parallel/InspektorGadget 11
41 TestAddons/parallel/MetricsServer 5.92
42 TestAddons/parallel/HelmTiller 21.21
44 TestAddons/parallel/CSI 72.3
46 TestAddons/parallel/CloudSpanner 5.68
47 TestAddons/parallel/LocalPath 56.69
48 TestAddons/parallel/NvidiaDevicePlugin 5.6
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 92.53
54 TestCertOptions 76.67
55 TestCertExpiration 259.49
57 TestForceSystemdFlag 91.59
58 TestForceSystemdEnv 62.79
60 TestKVMDriverInstallOrUpdate 7.97
64 TestErrorSpam/setup 46.49
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.56
68 TestErrorSpam/unpause 1.63
69 TestErrorSpam/stop 2.26
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 71.44
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.52
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 42.72
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.47
92 TestFunctional/serial/LogsFileCmd 1.5
93 TestFunctional/serial/InvalidService 4.5
95 TestFunctional/parallel/ConfigCmd 0.44
96 TestFunctional/parallel/DashboardCmd 44.84
97 TestFunctional/parallel/DryRun 0.35
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.04
103 TestFunctional/parallel/ServiceCmdConnect 9.53
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 45.66
107 TestFunctional/parallel/SSHCmd 0.46
108 TestFunctional/parallel/CpCmd 1.59
109 TestFunctional/parallel/MySQL 33.23
110 TestFunctional/parallel/FileSync 0.32
111 TestFunctional/parallel/CertSync 1.88
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 0.53
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
122 TestFunctional/parallel/ProfileCmd/profile_list 0.38
123 TestFunctional/parallel/MountCmd/any-port 9.62
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 0.66
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 5.43
141 TestFunctional/parallel/ImageCommands/Setup 2.16
146 TestFunctional/parallel/MountCmd/specific-port 1.87
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
149 TestFunctional/parallel/ServiceCmd/List 0.28
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
153 TestFunctional/parallel/ServiceCmd/Format 0.41
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
155 TestFunctional/parallel/ServiceCmd/URL 0.51
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestIngressAddonLegacy/StartLegacyK8sCluster 85.34
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.02
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 29.94
172 TestJSONOutput/start/Command 100.06
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.67
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.64
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 2.09
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.22
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 97.75
204 TestMountStart/serial/StartWithMountFirst 28.2
205 TestMountStart/serial/VerifyMountFirst 0.41
206 TestMountStart/serial/StartWithMountSecond 28.66
207 TestMountStart/serial/VerifyMountSecond 0.39
208 TestMountStart/serial/DeleteFirst 0.85
209 TestMountStart/serial/VerifyMountPostDelete 0.39
210 TestMountStart/serial/Stop 1.13
211 TestMountStart/serial/RestartStopped 24.57
212 TestMountStart/serial/VerifyMountPostStop 0.41
215 TestMultiNode/serial/FreshStart2Nodes 173.37
216 TestMultiNode/serial/DeployApp2Nodes 5.59
217 TestMultiNode/serial/PingHostFrom2Pods 0.91
218 TestMultiNode/serial/AddNode 42.75
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.74
222 TestMultiNode/serial/StopNode 2.19
223 TestMultiNode/serial/StartAfterStop 28.11
224 TestMultiNode/serial/RestartKeepsNodes 316.19
225 TestMultiNode/serial/DeleteNode 1.76
226 TestMultiNode/serial/StopMultiNode 183.7
227 TestMultiNode/serial/RestartMultiNode 95.31
228 TestMultiNode/serial/ValidateNameConflict 51.68
233 TestPreload 352.34
235 TestScheduledStopUnix 116.95
239 TestRunningBinaryUpgrade 251.1
241 TestKubernetesUpgrade 192.19
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
252 TestNoKubernetes/serial/StartWithK8s 98.44
253 TestStoppedBinaryUpgrade/Setup 2.49
254 TestStoppedBinaryUpgrade/Upgrade 180.35
255 TestNoKubernetes/serial/StartWithStopK8s 74.65
256 TestNoKubernetes/serial/Start 40.2
258 TestPause/serial/Start 85.06
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
260 TestNoKubernetes/serial/ProfileList 15.66
261 TestNoKubernetes/serial/Stop 1.12
262 TestNoKubernetes/serial/StartNoArgs 32.92
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
271 TestNetworkPlugins/group/false 3.59
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
276 TestPause/serial/SecondStartNoReconfiguration 66.57
277 TestPause/serial/Pause 1.46
278 TestPause/serial/VerifyStatus 0.32
279 TestPause/serial/Unpause 0.92
280 TestPause/serial/PauseAgain 1.05
281 TestPause/serial/DeletePaused 0.89
282 TestPause/serial/VerifyDeletedResources 0.4
284 TestStartStop/group/old-k8s-version/serial/FirstStart 157.25
286 TestStartStop/group/no-preload/serial/FirstStart 202.37
288 TestStartStop/group/embed-certs/serial/FirstStart 87.44
289 TestStartStop/group/embed-certs/serial/DeployApp 10.3
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
291 TestStartStop/group/embed-certs/serial/Stop 91.76
292 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
294 TestStartStop/group/old-k8s-version/serial/Stop 92.13
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.27
297 TestStartStop/group/no-preload/serial/DeployApp 10.31
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
299 TestStartStop/group/no-preload/serial/Stop 91.58
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/embed-certs/serial/SecondStart 327.87
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.26
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/old-k8s-version/serial/SecondStart 93.39
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
308 TestStartStop/group/no-preload/serial/SecondStart 312.51
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 310.96
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
314 TestStartStop/group/old-k8s-version/serial/Pause 2.66
316 TestStartStop/group/newest-cni/serial/FirstStart 59.41
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.68
319 TestStartStop/group/newest-cni/serial/Stop 2.11
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/newest-cni/serial/SecondStart 47.08
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
325 TestStartStop/group/newest-cni/serial/Pause 2.61
326 TestNetworkPlugins/group/auto/Start 101.74
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
330 TestStartStop/group/embed-certs/serial/Pause 2.75
331 TestNetworkPlugins/group/kindnet/Start 73.34
332 TestNetworkPlugins/group/auto/KubeletFlags 0.25
333 TestNetworkPlugins/group/auto/NetCatPod 9.28
334 TestNetworkPlugins/group/auto/DNS 0.19
335 TestNetworkPlugins/group/auto/Localhost 0.16
336 TestNetworkPlugins/group/auto/HairPin 0.16
337 TestNetworkPlugins/group/calico/Start 105.89
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/no-preload/serial/Pause 3.17
342 TestNetworkPlugins/group/custom-flannel/Start 88.62
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.34
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
349 TestNetworkPlugins/group/kindnet/NetCatPod 9.4
350 TestNetworkPlugins/group/enable-default-cni/Start 114.6
351 TestNetworkPlugins/group/kindnet/DNS 0.19
352 TestNetworkPlugins/group/kindnet/Localhost 0.14
353 TestNetworkPlugins/group/kindnet/HairPin 0.15
354 TestNetworkPlugins/group/flannel/Start 105.74
355 TestNetworkPlugins/group/calico/ControllerPod 6.01
356 TestNetworkPlugins/group/calico/KubeletFlags 0.27
357 TestNetworkPlugins/group/calico/NetCatPod 13.28
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.26
360 TestNetworkPlugins/group/calico/DNS 0.23
361 TestNetworkPlugins/group/calico/Localhost 0.39
362 TestNetworkPlugins/group/calico/HairPin 0.23
363 TestNetworkPlugins/group/custom-flannel/DNS 0.18
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
366 TestNetworkPlugins/group/bridge/Start 66.78
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
374 TestNetworkPlugins/group/flannel/NetCatPod 11.23
375 TestNetworkPlugins/group/flannel/DNS 0.18
376 TestNetworkPlugins/group/flannel/Localhost 0.14
377 TestNetworkPlugins/group/flannel/HairPin 0.16
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
379 TestNetworkPlugins/group/bridge/NetCatPod 9.26
380 TestNetworkPlugins/group/bridge/DNS 0.18
381 TestNetworkPlugins/group/bridge/Localhost 0.16
382 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (53.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-186149 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-186149 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (53.37300883s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (53.37s)
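
The `-o=json` flag exercised by this test makes `minikube start` emit one JSON event per line on stdout (CloudEvents-style step and download events). A minimal sketch of filtering such a stream — the sample lines below are hypothetical, patterned on minikube's step-event shape; the field names are assumptions, not taken from this run:

```python
import json

# Hypothetical sample lines patterned on minikube's -o=json step events;
# real output fields may differ by version.
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"1","totalsteps":"9","name":"Preparing Kubernetes"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.download",'
    '"data":{"artifact":"preloaded-images-k8s-v1.16.0"}}',
]

def step_names(lines):
    """Yield the name of each step event, skipping other event types."""
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.step":
            yield event["data"]["name"]

print(list(step_names(sample)))  # ['Preparing Kubernetes']
```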

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-186149
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-186149: exit status 85 (73.679488ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |          |
	|         | -p download-only-186149        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:22:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:22:40.688957   11647 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:22:40.689099   11647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:22:40.689112   11647 out.go:309] Setting ErrFile to fd 2...
	I0130 19:22:40.689119   11647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:22:40.689317   11647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	W0130 19:22:40.689458   11647 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18007-4431/.minikube/config/config.json: open /home/jenkins/minikube-integration/18007-4431/.minikube/config/config.json: no such file or directory
	I0130 19:22:40.690081   11647 out.go:303] Setting JSON to true
	I0130 19:22:40.691000   11647 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":305,"bootTime":1706642256,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:22:40.691056   11647 start.go:138] virtualization: kvm guest
	I0130 19:22:40.693798   11647 out.go:97] [download-only-186149] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:22:40.695528   11647 out.go:169] MINIKUBE_LOCATION=18007
	W0130 19:22:40.693978   11647 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball: no such file or directory
	I0130 19:22:40.694051   11647 notify.go:220] Checking for updates...
	I0130 19:22:40.698557   11647 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:22:40.700189   11647 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:22:40.701634   11647 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:22:40.703138   11647 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:22:40.706059   11647 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:22:40.706294   11647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:22:41.194859   11647 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:22:41.194889   11647 start.go:298] selected driver: kvm2
	I0130 19:22:41.194894   11647 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:22:41.195246   11647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:22:41.195367   11647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4431/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:22:41.209582   11647 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:22:41.209651   11647 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:22:41.210138   11647 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:22:41.210285   11647 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:22:41.210335   11647 cni.go:84] Creating CNI manager for ""
	I0130 19:22:41.210348   11647 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:22:41.210359   11647 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:22:41.210365   11647 start_flags.go:321] config:
	{Name:download-only-186149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-186149 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:22:41.210562   11647 iso.go:125] acquiring lock: {Name:mk030d287e6065b337323be40f294429c246fc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:22:41.212784   11647 out.go:97] Downloading VM boot image ...
	I0130 19:22:41.212829   11647 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 19:22:50.356153   11647 out.go:97] Starting control plane node download-only-186149 in cluster download-only-186149
	I0130 19:22:50.356196   11647 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0130 19:22:50.461595   11647 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0130 19:22:50.461625   11647 cache.go:56] Caching tarball of preloaded images
	I0130 19:22:50.461758   11647 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0130 19:22:50.463980   11647 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0130 19:22:50.464001   11647 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:22:50.575664   11647 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0130 19:23:05.668187   11647 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:23:05.668273   11647 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:23:06.566163   11647 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0130 19:23:06.566539   11647 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-186149/config.json ...
	I0130 19:23:06.566569   11647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-186149/config.json: {Name:mk20be565a9a73990d070e0e195370b4be456215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:23:06.566712   11647 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0130 19:23:06.566862   11647 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-186149"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-186149
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.28.4/json-events (42.81s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-315124 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-315124 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (42.809409801s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (42.81s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-315124
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-315124: exit status 85 (71.652741ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |                     |
	|         | -p download-only-186149        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| delete  | -p download-only-186149        | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| start   | -o=json --download-only        | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC |                     |
	|         | -p download-only-315124        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:23:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:23:34.411216   11943 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:23:34.411455   11943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:23:34.411464   11943 out.go:309] Setting ErrFile to fd 2...
	I0130 19:23:34.411468   11943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:23:34.411660   11943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:23:34.412218   11943 out.go:303] Setting JSON to true
	I0130 19:23:34.413007   11943 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":359,"bootTime":1706642256,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:23:34.413064   11943 start.go:138] virtualization: kvm guest
	I0130 19:23:34.415401   11943 out.go:97] [download-only-315124] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:23:34.417228   11943 out.go:169] MINIKUBE_LOCATION=18007
	I0130 19:23:34.415543   11943 notify.go:220] Checking for updates...
	I0130 19:23:34.420508   11943 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:23:34.421982   11943 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:23:34.423430   11943 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:23:34.424761   11943 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:23:34.427217   11943 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:23:34.427404   11943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:23:34.460737   11943 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:23:34.460768   11943 start.go:298] selected driver: kvm2
	I0130 19:23:34.460774   11943 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:23:34.461098   11943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:23:34.461165   11943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4431/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:23:34.475465   11943 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:23:34.475511   11943 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:23:34.475937   11943 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:23:34.476069   11943 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:23:34.476123   11943 cni.go:84] Creating CNI manager for ""
	I0130 19:23:34.476137   11943 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:23:34.476145   11943 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:23:34.476153   11943 start_flags.go:321] config:
	{Name:download-only-315124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-315124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:23:34.476312   11943 iso.go:125] acquiring lock: {Name:mk030d287e6065b337323be40f294429c246fc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:23:34.478312   11943 out.go:97] Starting control plane node download-only-315124 in cluster download-only-315124
	I0130 19:23:34.478330   11943 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0130 19:23:34.586211   11943 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0130 19:23:34.586251   11943 cache.go:56] Caching tarball of preloaded images
	I0130 19:23:34.586394   11943 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0130 19:23:34.588280   11943 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0130 19:23:34.588298   11943 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:23:34.697501   11943 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0130 19:23:48.649547   11943 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:23:48.649631   11943 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:23:49.585637   11943 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0130 19:23:49.585991   11943 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-315124/config.json ...
	I0130 19:23:49.586033   11943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-315124/config.json: {Name:mk22f9d132749adc75828ab828dbfa8f793d5de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:23:49.586224   11943 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0130 19:23:49.586379   11943 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-315124"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-315124
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (44.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-027774 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-027774 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (44.366768243s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (44.37s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-027774
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-027774: exit status 85 (74.95465ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:22 UTC |                     |
	|         | -p download-only-186149           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| delete  | -p download-only-186149           | download-only-186149 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC | 30 Jan 24 19:23 UTC |
	| start   | -o=json --download-only           | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:23 UTC |                     |
	|         | -p download-only-315124           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| delete  | -p download-only-315124           | download-only-315124 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC | 30 Jan 24 19:24 UTC |
	| start   | -o=json --download-only           | download-only-027774 | jenkins | v1.32.0 | 30 Jan 24 19:24 UTC |                     |
	|         | -p download-only-027774           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 19:24:17
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 19:24:17.560164   12181 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:24:17.560311   12181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:24:17.560323   12181 out.go:309] Setting ErrFile to fd 2...
	I0130 19:24:17.560330   12181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:24:17.560543   12181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:24:17.561083   12181 out.go:303] Setting JSON to true
	I0130 19:24:17.561835   12181 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":402,"bootTime":1706642256,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:24:17.561895   12181 start.go:138] virtualization: kvm guest
	I0130 19:24:17.564042   12181 out.go:97] [download-only-027774] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:24:17.565598   12181 out.go:169] MINIKUBE_LOCATION=18007
	I0130 19:24:17.564205   12181 notify.go:220] Checking for updates...
	I0130 19:24:17.568357   12181 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:24:17.569729   12181 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:24:17.570966   12181 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:24:17.572354   12181 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 19:24:17.574872   12181 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 19:24:17.575107   12181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:24:17.606382   12181 out.go:97] Using the kvm2 driver based on user configuration
	I0130 19:24:17.606406   12181 start.go:298] selected driver: kvm2
	I0130 19:24:17.606413   12181 start.go:902] validating driver "kvm2" against <nil>
	I0130 19:24:17.606715   12181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:24:17.606791   12181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18007-4431/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 19:24:17.620620   12181 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 19:24:17.620695   12181 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 19:24:17.621339   12181 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 19:24:17.621519   12181 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 19:24:17.621596   12181 cni.go:84] Creating CNI manager for ""
	I0130 19:24:17.621610   12181 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0130 19:24:17.621629   12181 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 19:24:17.621641   12181 start_flags.go:321] config:
	{Name:download-only-027774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-027774 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:24:17.621792   12181 iso.go:125] acquiring lock: {Name:mk030d287e6065b337323be40f294429c246fc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 19:24:17.623505   12181 out.go:97] Starting control plane node download-only-027774 in cluster download-only-027774
	I0130 19:24:17.623521   12181 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0130 19:24:17.727742   12181 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0130 19:24:17.727770   12181 cache.go:56] Caching tarball of preloaded images
	I0130 19:24:17.727929   12181 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0130 19:24:17.729869   12181 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0130 19:24:17.729890   12181 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:24:17.840967   12181 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0130 19:24:34.965833   12181 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:24:34.965946   12181 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-4431/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0130 19:24:35.775780   12181 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0130 19:24:35.776118   12181 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-027774/config.json ...
	I0130 19:24:35.776149   12181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/download-only-027774/config.json: {Name:mke58c3f17bf29b5a9e9702ee8f6e6347de39368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 19:24:35.776339   12181 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0130 19:24:35.776500   12181 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18007-4431/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-027774"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
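The preload flow logged above appends an md5 checksum to the download URL (`?checksum=md5:…`), saves it, and then verifies the tarball on disk before trusting the cache. A minimal sketch of that verify step — the payload and checksum below are stand-ins for the real tarball and the URL-supplied value:

```python
# Sketch of preload.go's "download, then verify checksum" step.
# The payload and expected digest are stand-ins: the real flow compares
# the tarball on disk against the md5 carried in ?checksum=md5:...
import hashlib

def verify_md5(payload: bytes, expected_hex: str) -> bool:
    """True when the payload's md5 digest matches the advertised checksum."""
    return hashlib.md5(payload).hexdigest() == expected_hex

payload = b"preloaded-images-k8s"                # stand-in for the tarball
expected = hashlib.md5(payload).hexdigest()      # stand-in for the URL value

print(verify_md5(payload, expected))             # intact download -> True
print(verify_md5(payload + b"!", expected))      # corrupted download -> False
```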
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-027774
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-338640 --alsologtostderr --binary-mirror http://127.0.0.1:36043 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-338640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-338640
--- PASS: TestBinaryMirror (0.57s)

TestOffline (66.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-619084 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-619084 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m5.064263796s)
helpers_test.go:175: Cleaning up "offline-containerd-619084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-619084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-619084: (1.076331456s)
--- PASS: TestOffline (66.14s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444600
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-444600: exit status 85 (67.988446ms)

-- stdout --
	* Profile "addons-444600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444600"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444600
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-444600: exit status 85 (66.732653ms)

-- stdout --
	* Profile "addons-444600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444600"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (219.87s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-444600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-444600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m39.874018376s)
--- PASS: TestAddons/Setup (219.87s)

TestAddons/parallel/Registry (17.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 19.176961ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-x98sf" [8d3a9cf6-39e5-437f-bda5-4dd87d2ca039] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005188447s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bsjjs" [5f9a3c47-9f01-4b1f-b10f-9da8dc70fcf6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009491528s
addons_test.go:340: (dbg) Run:  kubectl --context addons-444600 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-444600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-444600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.925566228s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 ip
2024/01/30 19:29:00 [DEBUG] GET http://192.168.39.249:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.78s)
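The registry check above boils down to `wget --spider`: issue a request and succeed if the endpoint answers, without downloading a body. A sketch of the equivalent HEAD probe, using a throwaway local server in place of the in-cluster `registry.kube-system.svc.cluster.local` service:

```python
# HEAD-style reachability probe, the same idea as the test's
# `wget --spider -S http://registry...`. A local HTTP server stands in
# for the in-cluster registry service here.
import http.server
import threading
import urllib.request

server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

req = urllib.request.Request(f"http://127.0.0.1:{port}/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    status = resp.status          # 200 means the endpoint is reachable
print(status)
server.shutdown()
```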

TestAddons/parallel/Ingress (21.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-444600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-444600 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-444600 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [93760368-62a7-4e42-be5c-31320a927407] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [93760368-62a7-4e42-be5c-31320a927407] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004372592s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-444600 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.249
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-444600 addons disable ingress-dns --alsologtostderr -v=1: (1.074078339s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-444600 addons disable ingress --alsologtostderr -v=1: (7.83462297s)
--- PASS: TestAddons/parallel/Ingress (21.29s)

TestAddons/parallel/InspektorGadget (11s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d8rvs" [aa8c8e0b-5b06-4b70-9287-3909ff20fbad] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00434011s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444600
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444600: (5.996788082s)
--- PASS: TestAddons/parallel/InspektorGadget (11.00s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 18.988268ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-lslrk" [a45f01f1-c37f-4e4f-8f61-7aa191b86125] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007548721s
addons_test.go:415: (dbg) Run:  kubectl --context addons-444600 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

TestAddons/parallel/HelmTiller (21.21s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.975816ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kkzmv" [39af6ffd-8c53-49cd-9630-cf1724428289] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.009919518s
addons_test.go:473: (dbg) Run:  kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.562859664s)
addons_test.go:478: kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.642608496s)
addons_test.go:478: kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-444600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.471070731s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (21.21s)
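The "Unable to use a TTY" stderr above is expected when `kubectl run -it` is invoked from a pipeline whose stdin is not a terminal; kubectl falls back to streaming logs. A hypothetical wrapper (not part of minikube) that only attaches the interactive flags when a terminal is actually present:

```python
# Build a `kubectl run` argument list, adding -it only when stdin is a
# real terminal. Pod name and image mirror the log above; the wrapper
# itself is illustrative.
import shlex

def kubectl_run_args(stdin_is_tty: bool) -> list[str]:
    args = [
        "kubectl", "run", "helm-test", "--restart=Never",
        "--image=docker.io/alpine/helm:2.16.3",
        "--namespace=kube-system",
    ]
    if stdin_is_tty:
        args.append("-it")        # safe: a terminal is attached
    args += ["--", "version"]
    return args

print(shlex.join(kubectl_run_args(stdin_is_tty=False)))  # no -it, no TTY warning
print(shlex.join(kubectl_run_args(stdin_is_tty=True)))
```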

TestAddons/parallel/CSI (72.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.751467ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-444600 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-444600 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7460116a-293a-486e-a016-aec7d4400772] Pending
helpers_test.go:344: "task-pv-pod" [7460116a-293a-486e-a016-aec7d4400772] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7460116a-293a-486e-a016-aec7d4400772] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007101855s
addons_test.go:584: (dbg) Run:  kubectl --context addons-444600 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-444600 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-444600 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-444600 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-444600 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6a581be7-0262-4de9-b9d7-a0788333c6dd] Pending
helpers_test.go:344: "task-pv-pod-restore" [6a581be7-0262-4de9-b9d7-a0788333c6dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6a581be7-0262-4de9-b9d7-a0788333c6dd] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006444553s
addons_test.go:626: (dbg) Run:  kubectl --context addons-444600 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-444600 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-444600 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-444600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.806474187s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (72.30s)
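The long runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are a poll loop: re-check the phase until it reaches the desired value or the retries run out. A sketch of that loop, with a fake `get_phase` standing in for the kubectl call (here the PVC binds on the third check):

```python
# Poll-until-phase loop in the style of helpers_test.go's PVC wait.
# `get_phase` fakes `kubectl get pvc -o jsonpath={.status.phase}`.
import itertools

def wait_for_phase(get_phase, want="Bound", attempts=20):
    for _ in range(attempts):
        phase = get_phase()
        if phase == want:
            return phase
        # the real helper sleeps between polls; omitted here
    raise TimeoutError(f"pvc never reached {want!r}")

phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Bound"))
print(wait_for_phase(lambda: next(phases)))  # -> Bound
```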

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-kpd5g" [4dbd1494-c837-4f1f-b1a9-38a421d6194b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004407065s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-444600
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (56.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-444600 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-444600 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5b497734-4f9c-4c32-9c65-b2466fbae6d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5b497734-4f9c-4c32-9c65-b2466fbae6d3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5b497734-4f9c-4c32-9c65-b2466fbae6d3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004592919s
addons_test.go:891: (dbg) Run:  kubectl --context addons-444600 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 ssh "cat /opt/local-path-provisioner/pvc-f4c838b1-fb83-40e9-a104-024de96ac353_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-444600 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-444600 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-444600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-444600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.716965298s)
--- PASS: TestAddons/parallel/LocalPath (56.69s)
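The repeated `helpers_test.go:394` lines above are the harness polling the PVC's `.status.phase` until it reaches `Bound`. A minimal shell sketch of that wait loop; `get_phase` is a hypothetical stub standing in for the real `kubectl --context addons-444600 get pvc test-pvc -o jsonpath={.status.phase} -n default` call so the example runs without a cluster:

```shell
#!/bin/sh
# Poll a PVC phase until it reports Bound, as the test helper does.
i=0
get_phase() {
  # Stub: pretend the PVC is Pending for the first two polls, then Bound.
  # Replace with the kubectl jsonpath query against a real cluster.
  [ "$i" -ge 2 ] && echo Bound || echo Pending
}
phase=""
while [ "$phase" != "Bound" ]; do
  phase=$(get_phase)
  i=$((i + 1))
  # A real loop would sleep between polls and enforce the 5m0s deadline.
done
echo "pvc phase: $phase after $i polls"
```

The real helper additionally caps the wait (here 5m0s) and fails the test if the phase never converges.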

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2tfw7" [fe8a7fc9-87ff-4886-b717-8fea5a316403] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005970802s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-444600
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-5s9nq" [2b3218b3-381c-4408-9f4f-fef3fc492c1a] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00538939s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-444600 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-444600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (92.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-444600
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-444600: (1m32.229169192s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444600
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444600
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-444600
--- PASS: TestAddons/StoppedEnableDisable (92.53s)

TestCertOptions (76.67s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-288009 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-288009 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m15.128295084s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-288009 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-288009 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-288009 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-288009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-288009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-288009: (1.044184136s)
--- PASS: TestCertOptions (76.67s)

TestCertExpiration (259.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285902 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285902 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m12.635715009s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285902 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285902 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (6.000275787s)
helpers_test.go:175: Cleaning up "cert-expiration-285902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-285902
--- PASS: TestCertExpiration (259.49s)
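The first start above uses `--cert-expiration=3m` to deliberately issue short-lived certificates, so the second start must detect the near-expiry and regenerate them. The field minikube compares against can be inspected with stock `openssl`; a sketch using a throwaway self-signed certificate (the path and subject are illustrative, not from the test):

```shell
#!/bin/sh
# Create a demo cert valid for 3 days, then read its expiry timestamp,
# i.e. the notAfter field that a --cert-expiration check looks at.
openssl req -x509 -newkey rsa:2048 -nodes -days 3 \
  -subj "/CN=cert-expiration-demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
enddate=$(openssl x509 -enddate -noout -in /tmp/demo.crt)
echo "$enddate"   # prints a line of the form "notAfter=<date>"
```

Against a running minikube node, the equivalent check would target `/var/lib/minikube/certs/apiserver.crt` over `minikube ssh`, as the TestCertOptions log above does with `openssl x509 -text -noout`.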

TestForceSystemdFlag (91.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-977603 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-977603 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m30.551388715s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-977603 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-977603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-977603
--- PASS: TestForceSystemdFlag (91.59s)

TestForceSystemdEnv (62.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-102780 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-102780 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m1.487881234s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-102780 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-102780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-102780
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-102780: (1.068881829s)
--- PASS: TestForceSystemdEnv (62.79s)

TestKVMDriverInstallOrUpdate (7.97s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.97s)

TestErrorSpam/setup (46.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-649854 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-649854 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-649854 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-649854 --driver=kvm2  --container-runtime=containerd: (46.492191589s)
--- PASS: TestErrorSpam/setup (46.49s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 pause
--- PASS: TestErrorSpam/pause (1.56s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (2.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 stop: (2.094287103s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649854 --log_dir /tmp/nospam-649854 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18007-4431/.minikube/files/etc/test/nested/copy/11635/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (71.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0130 19:33:43.142368   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.148135   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.158401   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.178671   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.218959   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.299372   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.459791   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:43.780343   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:44.421376   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:45.701843   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:33:48.263590   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-395377 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m11.440304596s)
--- PASS: TestFunctional/serial/StartWithProxy (71.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --alsologtostderr -v=8
E0130 19:33:53.384495   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-395377 --alsologtostderr -v=8: (5.514681069s)
functional_test.go:659: soft start took 5.51526225s for "functional-395377" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-395377 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:3.1: (1.133774755s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:3.3: (1.201306335s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 cache add registry.k8s.io/pause:latest: (1.129819394s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.681487ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cache reload
E0130 19:34:03.624960   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 cache reload: (1.138439174s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 kubectl -- --context functional-395377 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-395377 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.72s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0130 19:34:24.106493   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-395377 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.722273959s)
functional_test.go:757: restart took 42.722417469s for "functional-395377" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.72s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-395377 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 logs: (1.471969813s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 logs --file /tmp/TestFunctionalserialLogsFileCmd3843148121/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 logs --file /tmp/TestFunctionalserialLogsFileCmd3843148121/001/logs.txt: (1.501072519s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (4.5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-395377 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-395377
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-395377: exit status 115 (297.214427ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.119:32394 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-395377 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-395377 delete -f testdata/invalidsvc.yaml: (1.00557268s)
--- PASS: TestFunctional/serial/InvalidService (4.50s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 config get cpus: exit status 14 (85.619993ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 config get cpus: exit status 14 (64.919693ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
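The ConfigCmd run above shows that `config get` on an unset key fails with exit status 14 rather than printing an empty value, so scripts consuming it need to treat that status as "key absent". A minimal sketch of such a guard; `fake_config_get` is a hypothetical stand-in for `out/minikube-linux-amd64 -p <profile> config get`, so the sketch runs without a cluster:

```shell
#!/bin/sh
# fake_config_get is a stand-in that mimics the behaviour seen in the log:
# error message on stderr and exit status 14 when the key is not in the config.
fake_config_get() {
  echo "Error: specified key could not be found in config" >&2
  return 14
}

if value=$(fake_config_get cpus 2>/dev/null); then
  echo "cpus=$value"
else
  status=$?
  # Status 14 means "key not found"; anything else is a real failure.
  if [ "$status" -eq 14 ]; then
    echo "cpus is unset"
  else
    echo "unexpected failure: $status"
  fi
fi
```

Run against the stand-in, this prints `cpus is unset`; against a real profile where `cpus` has been set, the first branch would print the value instead.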

TestFunctional/parallel/DashboardCmd (44.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-395377 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-395377 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19830: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.84s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-395377 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (171.149476ms)

-- stdout --
	* [functional-395377] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0130 19:34:57.965352   18301 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:34:57.965471   18301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:34:57.965480   18301 out.go:309] Setting ErrFile to fd 2...
	I0130 19:34:57.965485   18301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:34:57.965695   18301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:34:57.966231   18301 out.go:303] Setting JSON to false
	I0130 19:34:57.967210   18301 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1042,"bootTime":1706642256,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:34:57.967271   18301 start.go:138] virtualization: kvm guest
	I0130 19:34:57.969483   18301 out.go:177] * [functional-395377] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 19:34:57.971064   18301 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:34:57.971004   18301 notify.go:220] Checking for updates...
	I0130 19:34:57.972382   18301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:34:57.973974   18301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:34:57.975308   18301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:34:57.976838   18301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:34:57.978485   18301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:34:57.980880   18301 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:34:57.981307   18301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:34:57.981374   18301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:34:57.997254   18301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0130 19:34:57.997709   18301 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:34:57.998321   18301 main.go:141] libmachine: Using API Version  1
	I0130 19:34:57.998344   18301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:34:57.998735   18301 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:34:57.998998   18301 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:34:57.999340   18301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:34:57.999716   18301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:34:57.999762   18301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:34:58.014880   18301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38319
	I0130 19:34:58.015295   18301 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:34:58.015711   18301 main.go:141] libmachine: Using API Version  1
	I0130 19:34:58.015736   18301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:34:58.016122   18301 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:34:58.016385   18301 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:34:58.055065   18301 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 19:34:58.056260   18301 start.go:298] selected driver: kvm2
	I0130 19:34:58.056273   18301 start.go:902] validating driver "kvm2" against &{Name:functional-395377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-395377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.119 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:34:58.056396   18301 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:34:58.060395   18301 out.go:177] 
	W0130 19:34:58.062313   18301 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0130 19:34:58.063686   18301 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)
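The dry-run failure above comes from minikube's memory validation: the requested 250MB is below the usable minimum of 1800MB reported in the log, and the CLI exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A hedged sketch of that check; `check_memory` and its hard-coded floor are illustrative, taken from the log output rather than from minikube's source:

```shell
#!/bin/sh
# check_memory mirrors the validation seen in the log: a request below the
# usable minimum is rejected with the RSRC_INSUFFICIENT_REQ_MEMORY message
# and a non-zero status (23, matching the observed exit code).
check_memory() {
  req_mb=$1
  min_mb=1800  # usable minimum reported by this build
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${min_mb}MB" >&2
    return 23
  fi
  return 0
}

check_memory 250 2>/dev/null || echo "rejected, status $?"
check_memory 4000 && echo "accepted"
```

This is also why the InternationalLanguage test below passes with the same command: only the locale of the error message changes, not the validation or the exit status.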

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-395377 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-395377 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (182.091207ms)

-- stdout --
	* [functional-395377] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0130 19:34:58.045791   18314 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:34:58.046747   18314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:34:58.046795   18314 out.go:309] Setting ErrFile to fd 2...
	I0130 19:34:58.046813   18314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:34:58.047399   18314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:34:58.048425   18314 out.go:303] Setting JSON to false
	I0130 19:34:58.049731   18314 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1042,"bootTime":1706642256,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 19:34:58.050169   18314 start.go:138] virtualization: kvm guest
	I0130 19:34:58.052568   18314 out.go:177] * [functional-395377] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0130 19:34:58.055073   18314 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 19:34:58.056344   18314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 19:34:58.054351   18314 notify.go:220] Checking for updates...
	I0130 19:34:58.060381   18314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 19:34:58.062352   18314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 19:34:58.064950   18314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 19:34:58.066263   18314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 19:34:58.068973   18314 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:34:58.069577   18314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:34:58.069646   18314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:34:58.088084   18314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0130 19:34:58.088522   18314 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:34:58.089211   18314 main.go:141] libmachine: Using API Version  1
	I0130 19:34:58.089237   18314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:34:58.089863   18314 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:34:58.090137   18314 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:34:58.090627   18314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 19:34:58.091259   18314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:34:58.091299   18314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:34:58.109618   18314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0130 19:34:58.110070   18314 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:34:58.110547   18314 main.go:141] libmachine: Using API Version  1
	I0130 19:34:58.110570   18314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:34:58.110929   18314 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:34:58.111115   18314 main.go:141] libmachine: (functional-395377) Calling .DriverName
	I0130 19:34:58.154034   18314 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0130 19:34:58.155415   18314 start.go:298] selected driver: kvm2
	I0130 19:34:58.155428   18314 start.go:902] validating driver "kvm2" against &{Name:functional-395377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-395377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.119 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 19:34:58.155536   18314 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 19:34:58.157473   18314 out.go:177] 
	W0130 19:34:58.158636   18314 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0130 19:34:58.159853   18314 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (9.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-395377 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-395377 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8cpb8" [dbc6e328-2122-4f9f-9d12-57768cf591ab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8cpb8" [dbc6e328-2122-4f9f-9d12-57768cf591ab] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.007800819s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.119:30528
functional_test.go:1671: http://192.168.50.119:30528: success! body:

Hostname: hello-node-connect-55497b8b78-8cpb8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.119:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.119:30528
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.53s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (45.66s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3910198b-bc39-4ef2-bb51-e6145cfb4358] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.008874834s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-395377 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-395377 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-395377 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-395377 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-395377 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [54d77531-a4b5-43eb-92dc-6c2dbd265162] Pending
helpers_test.go:344: "sp-pod" [54d77531-a4b5-43eb-92dc-6c2dbd265162] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [54d77531-a4b5-43eb-92dc-6c2dbd265162] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004585016s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-395377 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-395377 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-395377 delete -f testdata/storage-provisioner/pod.yaml: (1.699673279s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-395377 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1f98cb80-2b2d-4196-9423-c5ff45680ccb] Pending
helpers_test.go:344: "sp-pod" [1f98cb80-2b2d-4196-9423-c5ff45680ccb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1f98cb80-2b2d-4196-9423-c5ff45680ccb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004826961s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-395377 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.66s)

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.59s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh -n functional-395377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cp functional-395377:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3304713373/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh -n functional-395377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh -n functional-395377 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.59s)

TestFunctional/parallel/MySQL (33.23s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-395377 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ztbdg" [3bfd2e6e-4471-495b-bc48-7e481ca4b7f7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ztbdg" [3bfd2e6e-4471-495b-bc48-7e481ca4b7f7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.014241479s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;": exit status 1 (264.871908ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;": exit status 1 (194.358711ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;": exit status 1 (723.617545ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;": exit status 1 (148.784949ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-395377 exec mysql-859648c796-ztbdg -- mysql -ppassword -e "show databases;"
2024/01/30 19:35:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (33.23s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11635/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /etc/test/nested/copy/11635/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.88s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11635.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /etc/ssl/certs/11635.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11635.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /usr/share/ca-certificates/11635.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/116352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /etc/ssl/certs/116352.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/116352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /usr/share/ca-certificates/116352.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-395377 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "sudo systemctl is-active docker": exit status 1 (228.728244ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "sudo systemctl is-active crio": exit status 1 (215.96258ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-395377 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-395377 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-btc89" [6b90b5be-c6a3-4aa8-8633-55354bf82f1f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-btc89" [6b90b5be-c6a3-4aa8-8633-55354bf82f1f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005001966s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "293.97459ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "84.313759ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/MountCmd/any-port (9.62s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdany-port2548449051/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706643297437076936" to /tmp/TestFunctionalparallelMountCmdany-port2548449051/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706643297437076936" to /tmp/TestFunctionalparallelMountCmdany-port2548449051/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706643297437076936" to /tmp/TestFunctionalparallelMountCmdany-port2548449051/001/test-1706643297437076936
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.072402ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 30 19:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 30 19:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 30 19:34 test-1706643297437076936
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh cat /mount-9p/test-1706643297437076936
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-395377 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f9d379a6-c748-4089-b1a6-cc12d64b0691] Pending
helpers_test.go:344: "busybox-mount" [f9d379a6-c748-4089-b1a6-cc12d64b0691] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f9d379a6-c748-4089-b1a6-cc12d64b0691] Running
helpers_test.go:344: "busybox-mount" [f9d379a6-c748-4089-b1a6-cc12d64b0691] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0130 19:35:05.066815   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [f9d379a6-c748-4089-b1a6-cc12d64b0691] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005704553s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-395377 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdany-port2548449051/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.62s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "240.350161ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "64.355815ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-395377 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-395377 image ls --format short --alsologtostderr:
I0130 19:35:20.864356   19948 out.go:296] Setting OutFile to fd 1 ...
I0130 19:35:20.864688   19948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:20.864716   19948 out.go:309] Setting ErrFile to fd 2...
I0130 19:35:20.864727   19948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:20.865207   19948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
I0130 19:35:20.865793   19948 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:20.865931   19948 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:20.866376   19948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:20.866443   19948 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:20.880683   19948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
I0130 19:35:20.881134   19948 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:20.881629   19948 main.go:141] libmachine: Using API Version  1
I0130 19:35:20.881654   19948 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:20.882181   19948 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:20.882351   19948 main.go:141] libmachine: (functional-395377) Calling .GetState
I0130 19:35:20.884413   19948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:20.884464   19948 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:20.898266   19948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36599
I0130 19:35:20.898739   19948 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:20.899209   19948 main.go:141] libmachine: Using API Version  1
I0130 19:35:20.899238   19948 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:20.899598   19948 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:20.899779   19948 main.go:141] libmachine: (functional-395377) Calling .DriverName
I0130 19:35:20.900009   19948 ssh_runner.go:195] Run: systemctl --version
I0130 19:35:20.900040   19948 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
I0130 19:35:20.902907   19948 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:20.903236   19948 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
I0130 19:35:20.903267   19948 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:20.903468   19948 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
I0130 19:35:20.903632   19948 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
I0130 19:35:20.903817   19948 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
I0130 19:35:20.903939   19948 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
I0130 19:35:21.036748   19948 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:35:21.141986   19948 main.go:141] libmachine: Making call to close driver server
I0130 19:35:21.141997   19948 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:21.142285   19948 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:21.142303   19948 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:21.142315   19948 main.go:141] libmachine: Making call to close driver server
I0130 19:35:21.142329   19948 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:21.142527   19948 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:21.142605   19948 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:21.142640   19948 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-395377 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| localhost/my-image                      | functional-395377  | sha256:4390a2 | 775kB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                   | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/echoserver              | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | sha256:83f6cc | 24.6MB |
| docker.io/library/nginx                 | latest             | sha256:a87587 | 70.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                   | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                   | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | sha256:e3db31 | 18.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-395377 image ls --format table --alsologtostderr:
I0130 19:35:27.159351   20129 out.go:296] Setting OutFile to fd 1 ...
I0130 19:35:27.159600   20129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:27.159608   20129 out.go:309] Setting ErrFile to fd 2...
I0130 19:35:27.159614   20129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:27.159807   20129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
I0130 19:35:27.160441   20129 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:27.160550   20129 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:27.160917   20129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:27.160959   20129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:27.175016   20129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
I0130 19:35:27.175465   20129 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:27.176080   20129 main.go:141] libmachine: Using API Version  1
I0130 19:35:27.176112   20129 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:27.176470   20129 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:27.176654   20129 main.go:141] libmachine: (functional-395377) Calling .GetState
I0130 19:35:27.178613   20129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:27.178663   20129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:27.193522   20129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
I0130 19:35:27.193973   20129 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:27.194523   20129 main.go:141] libmachine: Using API Version  1
I0130 19:35:27.194550   20129 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:27.194913   20129 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:27.195114   20129 main.go:141] libmachine: (functional-395377) Calling .DriverName
I0130 19:35:27.195336   20129 ssh_runner.go:195] Run: systemctl --version
I0130 19:35:27.195365   20129 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
I0130 19:35:27.197909   20129 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:27.198278   20129 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
I0130 19:35:27.198320   20129 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:27.198422   20129 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
I0130 19:35:27.198571   20129 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
I0130 19:35:27.198719   20129 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
I0130 19:35:27.198845   20129 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
I0130 19:35:27.292685   20129 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:35:27.380791   20129 main.go:141] libmachine: Making call to close driver server
I0130 19:35:27.380812   20129 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:27.381104   20129 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:27.381158   20129 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:27.381169   20129 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:27.381184   20129 main.go:141] libmachine: Making call to close driver server
I0130 19:35:27.381197   20129 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:27.381398   20129 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:27.381428   20129 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:27.381436   20129 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-395377 image ls --format json --alsologtostderr:
[{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:4390a2efb3dfa3b0b466c9015fe3c2fba263fa8216010ef0266f78c1ac0e958f","repoDigests":[],"repoTags":["localhost/my-image:functional-395377"],"size":"774904"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-395377 image ls --format json --alsologtostderr:
I0130 19:35:26.899534   20106 out.go:296] Setting OutFile to fd 1 ...
I0130 19:35:26.899662   20106 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:26.899673   20106 out.go:309] Setting ErrFile to fd 2...
I0130 19:35:26.899680   20106 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:26.899856   20106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
I0130 19:35:26.900439   20106 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:26.900535   20106 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:26.900867   20106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:26.900920   20106 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:26.915437   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34167
I0130 19:35:26.915907   20106 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:26.916473   20106 main.go:141] libmachine: Using API Version  1
I0130 19:35:26.916500   20106 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:26.916837   20106 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:26.917101   20106 main.go:141] libmachine: (functional-395377) Calling .GetState
I0130 19:35:26.918864   20106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:26.918908   20106 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:26.933162   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
I0130 19:35:26.933635   20106 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:26.934095   20106 main.go:141] libmachine: Using API Version  1
I0130 19:35:26.934119   20106 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:26.934489   20106 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:26.934735   20106 main.go:141] libmachine: (functional-395377) Calling .DriverName
I0130 19:35:26.934945   20106 ssh_runner.go:195] Run: systemctl --version
I0130 19:35:26.934969   20106 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
I0130 19:35:26.938062   20106 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:26.938499   20106 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
I0130 19:35:26.938542   20106 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:26.938639   20106 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
I0130 19:35:26.938828   20106 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
I0130 19:35:26.938962   20106 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
I0130 19:35:26.939085   20106 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
I0130 19:35:27.029199   20106 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:35:27.096225   20106 main.go:141] libmachine: Making call to close driver server
I0130 19:35:27.096242   20106 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:27.096531   20106 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:27.096576   20106 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:27.096591   20106 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:27.096597   20106 main.go:141] libmachine: Making call to close driver server
I0130 19:35:27.096607   20106 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:27.096831   20106 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:27.096874   20106 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:27.096892   20106 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
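The stdout above is a single JSON array of image records. A minimal sketch of consuming that shape (field names are taken from the output above; the sample data below is a hypothetical two-record subset, and note that `size` is serialized as a string):

```python
import json

# Hypothetical, trimmed sample in the same shape as the
# `image ls --format json` stdout above.
sample = """[
  {"id": "sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
   "repoDigests": ["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],
   "repoTags": ["registry.k8s.io/coredns/coredns:v1.10.1"],
   "size": "16190758"},
  {"id": "sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06",
   "repoDigests": [],
   "repoTags": ["registry.k8s.io/pause:latest"],
   "size": "72306"}
]"""

images = json.loads(sample)
# size comes over the wire as a string, so convert before doing arithmetic.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(f"{len(images)} images, {total_bytes} bytes total")
```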

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-395377 image ls --format yaml --alsologtostderr:
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:3.3
  size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:latest
  size: "72306"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
  repoDigests:
  - docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
  repoTags:
  - docker.io/kindest/kindnetd:v20230809-80a64d96
  size: "27737299"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
  repoDigests:
  - registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
  repoTags:
  - registry.k8s.io/coredns/coredns:v1.10.1
  size: "16190758"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
  repoDigests:
  - registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
  repoTags:
  - registry.k8s.io/kube-apiserver:v1.28.4
  size: "34683820"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
  repoDigests:
  - registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
  repoTags:
  - registry.k8s.io/kube-controller-manager:v1.28.4
  size: "33420443"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
  repoDigests: []
  repoTags:
  - registry.k8s.io/pause:3.1
  size: "315399"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
  repoDigests:
  - docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
  repoTags:
  - docker.io/library/nginx:latest
  size: "70520324"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
  repoDigests:
  - gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
  repoTags:
  - gcr.io/k8s-minikube/busybox:1.28.4-glibc
  size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
  repoDigests:
  - gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
  repoTags:
  - gcr.io/k8s-minikube/storage-provisioner:v5
  size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
  repoDigests:
  - registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
  repoTags:
  - registry.k8s.io/echoserver:1.8
  size: "46237695"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
  repoDigests:
  - registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
  repoTags:
  - registry.k8s.io/kube-scheduler:v1.28.4
  size: "18834488"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
  repoDigests:
  - registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
  repoTags:
  - registry.k8s.io/etcd:3.5.9-0
  size: "102894559"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
  repoDigests:
  - registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
  repoTags:
  - registry.k8s.io/kube-proxy:v1.28.4
  size: "24581402"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
  repoDigests:
  - registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
  repoTags:
  - registry.k8s.io/pause:3.9
  size: "321520"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-395377 image ls --format yaml --alsologtostderr:
I0130 19:35:21.227533   19983 out.go:296] Setting OutFile to fd 1 ...
I0130 19:35:21.227632   19983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:21.227636   19983 out.go:309] Setting ErrFile to fd 2...
I0130 19:35:21.227641   19983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:21.227822   19983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
I0130 19:35:21.228428   19983 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:21.228533   19983 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:21.228903   19983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:21.228944   19983 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:21.243644   19983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
I0130 19:35:21.244132   19983 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:21.244735   19983 main.go:141] libmachine: Using API Version  1
I0130 19:35:21.244767   19983 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:21.245114   19983 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:21.245298   19983 main.go:141] libmachine: (functional-395377) Calling .GetState
I0130 19:35:21.247103   19983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:21.247142   19983 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:21.260788   19983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
I0130 19:35:21.261268   19983 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:21.261668   19983 main.go:141] libmachine: Using API Version  1
I0130 19:35:21.261707   19983 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:21.262088   19983 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:21.262304   19983 main.go:141] libmachine: (functional-395377) Calling .DriverName
I0130 19:35:21.262527   19983 ssh_runner.go:195] Run: systemctl --version
I0130 19:35:21.262555   19983 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
I0130 19:35:21.265444   19983 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:21.265926   19983 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
I0130 19:35:21.265953   19983 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:21.266128   19983 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
I0130 19:35:21.266310   19983 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
I0130 19:35:21.266464   19983 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
I0130 19:35:21.266604   19983 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
I0130 19:35:21.358705   19983 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 19:35:21.401241   19983 main.go:141] libmachine: Making call to close driver server
I0130 19:35:21.401254   19983 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:21.401555   19983 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:21.401587   19983 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:21.401587   19983 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:21.401597   19983 main.go:141] libmachine: Making call to close driver server
I0130 19:35:21.401607   19983 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:21.401884   19983 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:21.401890   19983 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:21.401928   19983 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh pgrep buildkitd: exit status 1 (240.635694ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image build -t localhost/my-image:functional-395377 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-395377 image build -t localhost/my-image:functional-395377 testdata/build --alsologtostderr: (4.931569837s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-395377 image build -t localhost/my-image:functional-395377 testdata/build --alsologtostderr:
I0130 19:35:21.711681   20038 out.go:296] Setting OutFile to fd 1 ...
I0130 19:35:21.711853   20038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:21.711868   20038 out.go:309] Setting ErrFile to fd 2...
I0130 19:35:21.711873   20038 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 19:35:21.712080   20038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
I0130 19:35:21.712704   20038 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:21.713247   20038 config.go:182] Loaded profile config "functional-395377": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0130 19:35:21.713706   20038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:21.713781   20038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:21.729257   20038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
I0130 19:35:21.729807   20038 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:21.730363   20038 main.go:141] libmachine: Using API Version  1
I0130 19:35:21.730392   20038 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:21.730750   20038 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:21.730968   20038 main.go:141] libmachine: (functional-395377) Calling .GetState
I0130 19:35:21.732727   20038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0130 19:35:21.732802   20038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 19:35:21.746590   20038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
I0130 19:35:21.747019   20038 main.go:141] libmachine: () Calling .GetVersion
I0130 19:35:21.747510   20038 main.go:141] libmachine: Using API Version  1
I0130 19:35:21.747536   20038 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 19:35:21.747879   20038 main.go:141] libmachine: () Calling .GetMachineName
I0130 19:35:21.748062   20038 main.go:141] libmachine: (functional-395377) Calling .DriverName
I0130 19:35:21.748253   20038 ssh_runner.go:195] Run: systemctl --version
I0130 19:35:21.748280   20038 main.go:141] libmachine: (functional-395377) Calling .GetSSHHostname
I0130 19:35:21.750625   20038 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:21.750962   20038 main.go:141] libmachine: (functional-395377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:bb:41", ip: ""} in network mk-functional-395377: {Iface:virbr1 ExpiryTime:2024-01-30 20:32:55 +0000 UTC Type:0 Mac:52:54:00:aa:bb:41 Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:functional-395377 Clientid:01:52:54:00:aa:bb:41}
I0130 19:35:21.751000   20038 main.go:141] libmachine: (functional-395377) DBG | domain functional-395377 has defined IP address 192.168.50.119 and MAC address 52:54:00:aa:bb:41 in network mk-functional-395377
I0130 19:35:21.751090   20038 main.go:141] libmachine: (functional-395377) Calling .GetSSHPort
I0130 19:35:21.751266   20038 main.go:141] libmachine: (functional-395377) Calling .GetSSHKeyPath
I0130 19:35:21.751450   20038 main.go:141] libmachine: (functional-395377) Calling .GetSSHUsername
I0130 19:35:21.751609   20038 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/functional-395377/id_rsa Username:docker}
I0130 19:35:21.850831   20038 build_images.go:151] Building image from path: /tmp/build.2428209301.tar
I0130 19:35:21.850944   20038 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0130 19:35:21.861922   20038 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2428209301.tar
I0130 19:35:21.866732   20038 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2428209301.tar: stat -c "%s %y" /var/lib/minikube/build/build.2428209301.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2428209301.tar': No such file or directory
I0130 19:35:21.866759   20038 ssh_runner.go:362] scp /tmp/build.2428209301.tar --> /var/lib/minikube/build/build.2428209301.tar (3072 bytes)
I0130 19:35:21.891900   20038 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2428209301
I0130 19:35:21.908783   20038 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2428209301 -xf /var/lib/minikube/build/build.2428209301.tar
I0130 19:35:21.917931   20038 containerd.go:379] Building image: /var/lib/minikube/build/build.2428209301
I0130 19:35:21.917995   20038 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2428209301 --local dockerfile=/var/lib/minikube/build/build.2428209301 --output type=image,name=localhost/my-image:functional-395377
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 1.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:0c13c859406aae9e94ebcca30be33710f2fbb49b3c8536b06d7ff46b30abf12d 0.0s done
#8 exporting config sha256:4390a2efb3dfa3b0b466c9015fe3c2fba263fa8216010ef0266f78c1ac0e958f 0.0s done
#8 naming to localhost/my-image:functional-395377 done
#8 DONE 0.2s
I0130 19:35:26.554425   20038 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2428209301 --local dockerfile=/var/lib/minikube/build/build.2428209301 --output type=image,name=localhost/my-image:functional-395377: (4.636379546s)
I0130 19:35:26.554517   20038 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2428209301
I0130 19:35:26.568682   20038 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2428209301.tar
I0130 19:35:26.581259   20038 build_images.go:207] Built localhost/my-image:functional-395377 from /tmp/build.2428209301.tar
I0130 19:35:26.581295   20038 build_images.go:123] succeeded building to: functional-395377
I0130 19:35:26.581300   20038 build_images.go:124] failed building to: 
I0130 19:35:26.581324   20038 main.go:141] libmachine: Making call to close driver server
I0130 19:35:26.581333   20038 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:26.581608   20038 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:26.581630   20038 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:26.581657   20038 main.go:141] libmachine: Making call to close driver server
I0130 19:35:26.581656   20038 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
I0130 19:35:26.581667   20038 main.go:141] libmachine: (functional-395377) Calling .Close
I0130 19:35:26.581951   20038 main.go:141] libmachine: Successfully made call to close driver server
I0130 19:35:26.581988   20038 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 19:35:26.581989   20038 main.go:141] libmachine: (functional-395377) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.43s)
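The buildkit trace above (steps #5 through #7) implies a three-instruction Dockerfile under testdata/build. The actual file is not printed in this log, so the following reconstruction is an assumption inferred purely from the trace:

```dockerfile
# Sketch reconstructed from the buildkit steps above; the real
# testdata/build Dockerfile is not shown in this log.
FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
RUN true
ADD content.txt /
```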

TestFunctional/parallel/ImageCommands/Setup (2.16s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.144111235s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-395377
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.16s)

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdspecific-port1487257330/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (232.102207ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdspecific-port1487257330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "sudo umount -f /mount-9p": exit status 1 (273.60642ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-395377 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdspecific-port1487257330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image rm gcr.io/google-containers/addon-resizer:functional-395377 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service list -o json
functional_test.go:1490: Took "408.252858ms" to run "out/minikube-linux-amd64 -p functional-395377 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.119:31976
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T" /mount1: exit status 1 (371.848423ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-395377 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-395377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3151060775/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.119:31976
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-395377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-395377
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-395377
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-395377
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (85.34s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-397745 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0130 19:36:26.988057   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-397745 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m25.338311824s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (85.34s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons enable ingress --alsologtostderr -v=5: (12.017229388s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.02s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (29.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-397745 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-397745 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.161180238s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-397745 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-397745 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [349035f7-939e-4385-b0f1-e13005e7dd54] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [349035f7-939e-4385-b0f1-e13005e7dd54] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003769295s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-397745 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.112
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons disable ingress-dns --alsologtostderr -v=1: (1.985704333s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-397745 addons disable ingress --alsologtostderr -v=1: (7.60634229s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (29.94s)

TestJSONOutput/start/Command (100.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-743353 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0130 19:38:43.140423   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:39:10.828385   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-743353 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.056301487s)
--- PASS: TestJSONOutput/start/Command (100.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-743353 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-743353 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-743353 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-743353 --output=json --user=testUser: (2.092220063s)
--- PASS: TestJSONOutput/stop/Command (2.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-588172 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-588172 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.476355ms)
-- stdout --
	{"specversion":"1.0","id":"c0c46268-1e45-4900-b9d7-0bc735b715de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-588172] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19c09e4f-bbcb-499a-a709-6083fb039a42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18007"}}
	{"specversion":"1.0","id":"61d429cf-f1fa-4b23-873b-967c7848718e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e6bfa7a9-a672-4dc2-ba65-9eb99be28d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig"}}
	{"specversion":"1.0","id":"70ab9e4a-1e88-46ec-a05d-231e2c03c8cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube"}}
	{"specversion":"1.0","id":"7c4452f3-f03a-45fa-86d6-d3580eeaead2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"59621063-1a41-4a72-949e-d0d9bbcc386a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77dfd8da-bc84-461b-a9e9-50c6fac85c3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-588172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-588172
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (97.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-961563 --driver=kvm2  --container-runtime=containerd
E0130 19:39:56.596370   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.601634   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.611883   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.632145   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.672456   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.752801   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:56.913261   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:57.233837   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:57.874859   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:39:59.155365   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:40:01.716277   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:40:06.836952   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:40:17.077373   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-961563 --driver=kvm2  --container-runtime=containerd: (47.511204586s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-964003 --driver=kvm2  --container-runtime=containerd
E0130 19:40:37.557579   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:41:18.518777   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-964003 --driver=kvm2  --container-runtime=containerd: (47.575249201s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-961563
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-964003
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-964003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-964003
helpers_test.go:175: Cleaning up "first-961563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-961563
--- PASS: TestMinikubeProfile (97.75s)

TestMountStart/serial/StartWithMountFirst (28.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-787438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-787438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.19906647s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.20s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-787438 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-787438 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (28.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-806060 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-806060 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.663787211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.66s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.85s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-787438 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.13s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-806060
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-806060: (1.1338321s)
--- PASS: TestMountStart/serial/Stop (1.13s)

TestMountStart/serial/RestartStopped (24.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-806060
E0130 19:42:33.660407   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.665698   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.676014   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.696309   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.736604   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.816922   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:33.977327   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:34.297970   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:34.938957   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:36.219452   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:38.781322   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:42:40.439058   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:42:43.902226   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-806060: (23.570876229s)
--- PASS: TestMountStart/serial/RestartStopped (24.57s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806060 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (173.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009944 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0130 19:42:54.142754   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:43:14.623792   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:43:43.140254   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:43:55.584031   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:44:56.596546   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:45:17.504275   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:45:24.279918   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009944 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m52.922437912s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (173.37s)

TestMultiNode/serial/DeployApp2Nodes (5.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-009944 -- rollout status deployment/busybox: (3.72479741s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-6rpkj -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-9bmvn -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-6rpkj -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-9bmvn -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-6rpkj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-9bmvn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.59s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-6rpkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-6rpkj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-9bmvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009944 -- exec busybox-5b5d89c9d6-9bmvn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (42.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-009944 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-009944 -v 3 --alsologtostderr: (42.155630798s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.75s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-009944 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.74s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp testdata/cp-test.txt multinode-009944:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3343422686/001/cp-test_multinode-009944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944:/home/docker/cp-test.txt multinode-009944-m02:/home/docker/cp-test_multinode-009944_multinode-009944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test_multinode-009944_multinode-009944-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944:/home/docker/cp-test.txt multinode-009944-m03:/home/docker/cp-test_multinode-009944_multinode-009944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test_multinode-009944_multinode-009944-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp testdata/cp-test.txt multinode-009944-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3343422686/001/cp-test_multinode-009944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m02:/home/docker/cp-test.txt multinode-009944:/home/docker/cp-test_multinode-009944-m02_multinode-009944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test_multinode-009944-m02_multinode-009944.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m02:/home/docker/cp-test.txt multinode-009944-m03:/home/docker/cp-test_multinode-009944-m02_multinode-009944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test_multinode-009944-m02_multinode-009944-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp testdata/cp-test.txt multinode-009944-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3343422686/001/cp-test_multinode-009944-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m03:/home/docker/cp-test.txt multinode-009944:/home/docker/cp-test_multinode-009944-m03_multinode-009944.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944 "sudo cat /home/docker/cp-test_multinode-009944-m03_multinode-009944.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 cp multinode-009944-m03:/home/docker/cp-test.txt multinode-009944-m02:/home/docker/cp-test_multinode-009944-m03_multinode-009944-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 ssh -n multinode-009944-m02 "sudo cat /home/docker/cp-test_multinode-009944-m03_multinode-009944-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.74s)

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-009944 node stop m03: (1.301241295s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009944 status: exit status 7 (443.643386ms)

-- stdout --
	multinode-009944
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009944-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009944-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr: exit status 7 (447.722568ms)

-- stdout --
	multinode-009944
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009944-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009944-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0130 19:46:44.858201   26748 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:46:44.858602   26748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:46:44.858617   26748 out.go:309] Setting ErrFile to fd 2...
	I0130 19:46:44.858626   26748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:46:44.859053   26748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:46:44.859331   26748 out.go:303] Setting JSON to false
	I0130 19:46:44.859362   26748 mustload.go:65] Loading cluster: multinode-009944
	I0130 19:46:44.859657   26748 notify.go:220] Checking for updates...
	I0130 19:46:44.860283   26748 config.go:182] Loaded profile config "multinode-009944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:46:44.860304   26748 status.go:255] checking status of multinode-009944 ...
	I0130 19:46:44.860750   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:44.860830   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:44.875447   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0130 19:46:44.875836   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:44.876372   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:44.876392   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:44.876788   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:44.876997   26748 main.go:141] libmachine: (multinode-009944) Calling .GetState
	I0130 19:46:44.878524   26748 status.go:330] multinode-009944 host status = "Running" (err=<nil>)
	I0130 19:46:44.878538   26748 host.go:66] Checking if "multinode-009944" exists ...
	I0130 19:46:44.878849   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:44.878888   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:44.892769   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0130 19:46:44.893132   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:44.893488   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:44.893510   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:44.893741   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:44.893922   26748 main.go:141] libmachine: (multinode-009944) Calling .GetIP
	I0130 19:46:44.896422   26748 main.go:141] libmachine: (multinode-009944) DBG | domain multinode-009944 has defined MAC address 52:54:00:84:06:3a in network mk-multinode-009944
	I0130 19:46:44.896776   26748 main.go:141] libmachine: (multinode-009944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:06:3a", ip: ""} in network mk-multinode-009944: {Iface:virbr1 ExpiryTime:2024-01-30 20:43:08 +0000 UTC Type:0 Mac:52:54:00:84:06:3a Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-009944 Clientid:01:52:54:00:84:06:3a}
	I0130 19:46:44.896819   26748 main.go:141] libmachine: (multinode-009944) DBG | domain multinode-009944 has defined IP address 192.168.39.120 and MAC address 52:54:00:84:06:3a in network mk-multinode-009944
	I0130 19:46:44.896935   26748 host.go:66] Checking if "multinode-009944" exists ...
	I0130 19:46:44.897243   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:44.897284   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:44.911463   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0130 19:46:44.911843   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:44.912283   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:44.912313   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:44.912591   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:44.912766   26748 main.go:141] libmachine: (multinode-009944) Calling .DriverName
	I0130 19:46:44.912967   26748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 19:46:44.912996   26748 main.go:141] libmachine: (multinode-009944) Calling .GetSSHHostname
	I0130 19:46:44.915544   26748 main.go:141] libmachine: (multinode-009944) DBG | domain multinode-009944 has defined MAC address 52:54:00:84:06:3a in network mk-multinode-009944
	I0130 19:46:44.915937   26748 main.go:141] libmachine: (multinode-009944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:06:3a", ip: ""} in network mk-multinode-009944: {Iface:virbr1 ExpiryTime:2024-01-30 20:43:08 +0000 UTC Type:0 Mac:52:54:00:84:06:3a Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-009944 Clientid:01:52:54:00:84:06:3a}
	I0130 19:46:44.915976   26748 main.go:141] libmachine: (multinode-009944) DBG | domain multinode-009944 has defined IP address 192.168.39.120 and MAC address 52:54:00:84:06:3a in network mk-multinode-009944
	I0130 19:46:44.916066   26748 main.go:141] libmachine: (multinode-009944) Calling .GetSSHPort
	I0130 19:46:44.916248   26748 main.go:141] libmachine: (multinode-009944) Calling .GetSSHKeyPath
	I0130 19:46:44.916383   26748 main.go:141] libmachine: (multinode-009944) Calling .GetSSHUsername
	I0130 19:46:44.916526   26748 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/multinode-009944/id_rsa Username:docker}
	I0130 19:46:45.011375   26748 ssh_runner.go:195] Run: systemctl --version
	I0130 19:46:45.017819   26748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:46:45.032401   26748 kubeconfig.go:92] found "multinode-009944" server: "https://192.168.39.120:8443"
	I0130 19:46:45.032427   26748 api_server.go:166] Checking apiserver status ...
	I0130 19:46:45.032456   26748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 19:46:45.044416   26748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	I0130 19:46:45.054156   26748 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod6adf31224ef4babc3355d042af802fec/1314fadbd9b1b94788aef2929a3537e66d90318f3bcecc766b768f1706fee39e"
	I0130 19:46:45.054225   26748 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6adf31224ef4babc3355d042af802fec/1314fadbd9b1b94788aef2929a3537e66d90318f3bcecc766b768f1706fee39e/freezer.state
	I0130 19:46:45.064318   26748 api_server.go:204] freezer state: "THAWED"
	I0130 19:46:45.064344   26748 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I0130 19:46:45.069812   26748 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I0130 19:46:45.069840   26748 status.go:421] multinode-009944 apiserver status = Running (err=<nil>)
	I0130 19:46:45.069848   26748 status.go:257] multinode-009944 status: &{Name:multinode-009944 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0130 19:46:45.069866   26748 status.go:255] checking status of multinode-009944-m02 ...
	I0130 19:46:45.070158   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:45.070194   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:45.084695   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
	I0130 19:46:45.085173   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:45.085717   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:45.085742   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:45.086106   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:45.086271   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetState
	I0130 19:46:45.087878   26748 status.go:330] multinode-009944-m02 host status = "Running" (err=<nil>)
	I0130 19:46:45.087894   26748 host.go:66] Checking if "multinode-009944-m02" exists ...
	I0130 19:46:45.088220   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:45.088252   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:45.103099   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0130 19:46:45.103496   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:45.103883   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:45.103904   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:45.104269   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:45.104411   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetIP
	I0130 19:46:45.106954   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | domain multinode-009944-m02 has defined MAC address 52:54:00:ab:3c:5e in network mk-multinode-009944
	I0130 19:46:45.107328   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:3c:5e", ip: ""} in network mk-multinode-009944: {Iface:virbr1 ExpiryTime:2024-01-30 20:44:13 +0000 UTC Type:0 Mac:52:54:00:ab:3c:5e Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-009944-m02 Clientid:01:52:54:00:ab:3c:5e}
	I0130 19:46:45.107363   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | domain multinode-009944-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:ab:3c:5e in network mk-multinode-009944
	I0130 19:46:45.107466   26748 host.go:66] Checking if "multinode-009944-m02" exists ...
	I0130 19:46:45.107791   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:45.107834   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:45.121866   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I0130 19:46:45.122224   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:45.122632   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:45.122651   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:45.122917   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:45.123093   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .DriverName
	I0130 19:46:45.123253   26748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 19:46:45.123276   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetSSHHostname
	I0130 19:46:45.125716   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | domain multinode-009944-m02 has defined MAC address 52:54:00:ab:3c:5e in network mk-multinode-009944
	I0130 19:46:45.126154   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:3c:5e", ip: ""} in network mk-multinode-009944: {Iface:virbr1 ExpiryTime:2024-01-30 20:44:13 +0000 UTC Type:0 Mac:52:54:00:ab:3c:5e Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-009944-m02 Clientid:01:52:54:00:ab:3c:5e}
	I0130 19:46:45.126197   26748 main.go:141] libmachine: (multinode-009944-m02) DBG | domain multinode-009944-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:ab:3c:5e in network mk-multinode-009944
	I0130 19:46:45.126324   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetSSHPort
	I0130 19:46:45.126473   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetSSHKeyPath
	I0130 19:46:45.126608   26748 main.go:141] libmachine: (multinode-009944-m02) Calling .GetSSHUsername
	I0130 19:46:45.126780   26748 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18007-4431/.minikube/machines/multinode-009944-m02/id_rsa Username:docker}
	I0130 19:46:45.219656   26748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 19:46:45.231693   26748 status.go:257] multinode-009944-m02 status: &{Name:multinode-009944-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0130 19:46:45.231723   26748 status.go:255] checking status of multinode-009944-m03 ...
	I0130 19:46:45.232015   26748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:46:45.232047   26748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:46:45.247429   26748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
	I0130 19:46:45.247810   26748 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:46:45.248277   26748 main.go:141] libmachine: Using API Version  1
	I0130 19:46:45.248301   26748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:46:45.248629   26748 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:46:45.248838   26748 main.go:141] libmachine: (multinode-009944-m03) Calling .GetState
	I0130 19:46:45.250525   26748 status.go:330] multinode-009944-m03 host status = "Stopped" (err=<nil>)
	I0130 19:46:45.250543   26748 status.go:343] host is not running, skipping remaining checks
	I0130 19:46:45.250548   26748 status.go:257] multinode-009944-m03 status: &{Name:multinode-009944-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)

TestMultiNode/serial/StartAfterStop (28.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-009944 node start m03 --alsologtostderr: (27.467864031s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.11s)

TestMultiNode/serial/RestartKeepsNodes (316.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009944
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-009944
E0130 19:47:33.660296   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:48:01.344448   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:48:43.142133   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:49:56.596402   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 19:50:06.190230   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-009944: (3m4.684351571s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009944 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009944 --wait=true -v=8 --alsologtostderr: (2m11.398815427s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009944
--- PASS: TestMultiNode/serial/RestartKeepsNodes (316.19s)

TestMultiNode/serial/DeleteNode (1.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-009944 node delete m03: (1.201629382s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

TestMultiNode/serial/StopMultiNode (183.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 stop
E0130 19:52:33.660816   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:53:43.140346   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:54:56.597075   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-009944 stop: (3m3.512935075s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009944 status: exit status 7 (93.054318ms)

-- stdout --
	multinode-009944
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-009944-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr: exit status 7 (91.735937ms)

-- stdout --
	multinode-009944
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-009944-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0130 19:55:34.984112   28891 out.go:296] Setting OutFile to fd 1 ...
	I0130 19:55:34.984360   28891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:55:34.984368   28891 out.go:309] Setting ErrFile to fd 2...
	I0130 19:55:34.984373   28891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 19:55:34.984575   28891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 19:55:34.984733   28891 out.go:303] Setting JSON to false
	I0130 19:55:34.984756   28891 mustload.go:65] Loading cluster: multinode-009944
	I0130 19:55:34.984803   28891 notify.go:220] Checking for updates...
	I0130 19:55:34.985161   28891 config.go:182] Loaded profile config "multinode-009944": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 19:55:34.985174   28891 status.go:255] checking status of multinode-009944 ...
	I0130 19:55:34.985614   28891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:55:34.985668   28891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:55:35.000758   28891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0130 19:55:35.001195   28891 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:55:35.001810   28891 main.go:141] libmachine: Using API Version  1
	I0130 19:55:35.001830   28891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:55:35.002166   28891 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:55:35.002343   28891 main.go:141] libmachine: (multinode-009944) Calling .GetState
	I0130 19:55:35.003844   28891 status.go:330] multinode-009944 host status = "Stopped" (err=<nil>)
	I0130 19:55:35.003857   28891 status.go:343] host is not running, skipping remaining checks
	I0130 19:55:35.003865   28891 status.go:257] multinode-009944 status: &{Name:multinode-009944 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0130 19:55:35.003902   28891 status.go:255] checking status of multinode-009944-m02 ...
	I0130 19:55:35.004201   28891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0130 19:55:35.004236   28891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 19:55:35.018099   28891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0130 19:55:35.018507   28891 main.go:141] libmachine: () Calling .GetVersion
	I0130 19:55:35.018976   28891 main.go:141] libmachine: Using API Version  1
	I0130 19:55:35.018997   28891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 19:55:35.019266   28891 main.go:141] libmachine: () Calling .GetMachineName
	I0130 19:55:35.019426   28891 main.go:141] libmachine: (multinode-009944-m02) Calling .GetState
	I0130 19:55:35.020847   28891 status.go:330] multinode-009944-m02 host status = "Stopped" (err=<nil>)
	I0130 19:55:35.020863   28891 status.go:343] host is not running, skipping remaining checks
	I0130 19:55:35.020870   28891 status.go:257] multinode-009944-m02 status: &{Name:multinode-009944-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.70s)

TestMultiNode/serial/RestartMultiNode (95.31s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009944 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0130 19:56:19.640283   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009944 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m34.794473075s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009944 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.31s)

TestMultiNode/serial/ValidateNameConflict (51.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009944
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009944-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-009944-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (73.530019ms)

-- stdout --
	* [multinode-009944-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-009944-m02' is duplicated with machine name 'multinode-009944-m02' in profile 'multinode-009944'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009944-m03 --driver=kvm2  --container-runtime=containerd
E0130 19:57:33.661042   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009944-m03 --driver=kvm2  --container-runtime=containerd: (50.35214698s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-009944
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-009944: exit status 80 (228.306077ms)

-- stdout --
	* Adding node m03 to cluster multinode-009944
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-009944-m03 already exists in multinode-009944-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-009944-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.68s)

TestPreload (352.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-379968 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0130 19:58:43.140321   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
E0130 19:58:56.705633   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 19:59:56.596305   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-379968 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m3.365711146s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-379968 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-379968 image pull gcr.io/k8s-minikube/busybox: (2.459093942s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-379968
E0130 20:02:33.660322   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-379968: (1m31.51708345s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-379968 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0130 20:03:43.140358   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-379968 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m13.705266444s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-379968 image list
helpers_test.go:175: Cleaning up "test-preload-379968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-379968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-379968: (1.047825283s)
--- PASS: TestPreload (352.34s)

TestScheduledStopUnix (116.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-059851 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-059851 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.237157505s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059851 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-059851 -n scheduled-stop-059851
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059851 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059851 --cancel-scheduled
E0130 20:04:56.596215   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059851 -n scheduled-stop-059851
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059851
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-059851 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-059851
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-059851: exit status 7 (74.835143ms)

-- stdout --
	scheduled-stop-059851
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059851 -n scheduled-stop-059851
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-059851 -n scheduled-stop-059851: exit status 7 (74.650517ms)

-- stdout --
	Stopped

                                                
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-059851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-059851
--- PASS: TestScheduledStopUnix (116.95s)

TestRunningBinaryUpgrade (251.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3539274056 start -p running-upgrade-637090 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0130 20:06:46.191210   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3539274056 start -p running-upgrade-637090 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m13.350620979s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-637090 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-637090 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m53.965852827s)
helpers_test.go:175: Cleaning up "running-upgrade-637090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-637090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-637090: (1.264163709s)
--- PASS: TestRunningBinaryUpgrade (251.10s)

TestKubernetesUpgrade (192.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.589101379s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-963584
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-963584: (2.108293706s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-963584 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-963584 status --format={{.Host}}: exit status 7 (81.093099ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.761998598s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-963584 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (117.49491ms)

-- stdout --
	* [kubernetes-upgrade-963584] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-963584
	    minikube start -p kubernetes-upgrade-963584 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9635842 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-963584 --kubernetes-version=v1.29.0-rc.2

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963584 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (19.693970012s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-963584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-963584
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-963584: (1.778519859s)
--- PASS: TestKubernetesUpgrade (192.19s)
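The `status --format={{.Host}}` step above deliberately tolerates exit status 7, which minikube uses to report a stopped host ("status error: exit status 7 (may be ok)"). A minimal shell sketch of that convention — the `check_status` wrapper and the stub command are hypothetical, for illustration only:

```shell
#!/bin/sh
# Hypothetical wrapper mirroring how the test treats `minikube status`
# exit codes: 0 means the host is running, 7 means it is stopped
# (acceptable right before an upgrade restart), anything else is an error.
check_status() {
  "$@"
  rc=$?
  case "$rc" in
    0) echo "host running" ;;
    7) echo "host stopped (may be ok)" ;;
    *) echo "status error: exit status $rc" ;;
  esac
}

# Stub standing in for `out/minikube-linux-amd64 -p <profile> status`:
check_status sh -c 'exit 7'
```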

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (92.280606ms)

-- stdout --
	* [NoKubernetes-649604] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
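The exit status 14 (MK_USAGE) above comes from a mutual-exclusion check: `--kubernetes-version` cannot be combined with `--no-kubernetes`. A hypothetical sketch of that validation — `validate_flags` is illustrative, not minikube's actual code:

```shell
#!/bin/sh
# Illustrative re-creation of the flag check that yields MK_USAGE (14):
# --no-kubernetes and --kubernetes-version are mutually exclusive.
validate_flags() {
  no_k8s="" k8s_version=""
  for arg in "$@"; do
    case "$arg" in
      --no-kubernetes)        no_k8s=1 ;;
      --kubernetes-version=*) k8s_version="${arg#*=}" ;;
    esac
  done
  if [ -n "$no_k8s" ] && [ -n "$k8s_version" ]; then
    echo "cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
}

validate_flags --no-kubernetes --kubernetes-version=1.20
echo "exit: $?"
```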

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-649604 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-649604 --driver=kvm2  --container-runtime=containerd: (1m38.162973245s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-649604 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.44s)

TestStoppedBinaryUpgrade/Setup (2.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.49s)

TestStoppedBinaryUpgrade/Upgrade (180.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1905861339 start -p stopped-upgrade-510680 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1905861339 start -p stopped-upgrade-510680 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m34.136918888s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1905861339 -p stopped-upgrade-510680 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1905861339 -p stopped-upgrade-510680 stop: (2.153509124s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-510680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0130 20:08:43.140085   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-510680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m24.059760105s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (180.35s)

TestNoKubernetes/serial/StartWithStopK8s (74.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0130 20:07:33.660074   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m13.222148008s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-649604 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-649604 status -o json: exit status 2 (301.172913ms)

-- stdout --
	{"Name":"NoKubernetes-649604","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-649604
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-649604: (1.130862087s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (74.65s)

TestNoKubernetes/serial/Start (40.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-649604 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (40.19587592s)
--- PASS: TestNoKubernetes/serial/Start (40.20s)

TestPause/serial/Start (85.06s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-033412 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-033412 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m25.06164131s)
--- PASS: TestPause/serial/Start (85.06s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-649604 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-649604 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.344789ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
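`systemctl is-active --quiet` exits non-zero when the unit is inactive (status 3 here, surfaced through ssh as exit status 1), and the test relies on that to confirm kubelet is not running. A small sketch of the same check — the stub replacing the real `systemctl` call is hypothetical:

```shell
#!/bin/sh
# `systemctl is-active --quiet <unit>` exits 0 only when the unit is active;
# an inactive unit yields a non-zero code (commonly 3). The test passes
# when the command FAILS, i.e. kubelet is not running.
unit_is_inactive() {
  ! "$@"   # invert: success here means the unit is NOT active
}

# Stub standing in for `systemctl is-active --quiet kubelet`:
if unit_is_inactive sh -c 'exit 3'; then
  echo "kubelet not running (as expected)"
fi
```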

                                                
                                    
TestNoKubernetes/serial/ProfileList (15.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.233890516s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.66s)

TestNoKubernetes/serial/Stop (1.12s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-649604
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-649604: (1.117843128s)
--- PASS: TestNoKubernetes/serial/Stop (1.12s)

TestNoKubernetes/serial/StartNoArgs (32.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-649604 --driver=kvm2  --container-runtime=containerd
E0130 20:09:56.595614   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-649604 --driver=kvm2  --container-runtime=containerd: (32.916729169s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.92s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-510680
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-510680: (1.145848858s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestNetworkPlugins/group/false (3.59s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-255550 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-255550 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (111.784976ms)

-- stdout --
	* [false-255550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0130 20:10:08.955921   36246 out.go:296] Setting OutFile to fd 1 ...
	I0130 20:10:08.956099   36246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:10:08.956113   36246 out.go:309] Setting ErrFile to fd 2...
	I0130 20:10:08.956120   36246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 20:10:08.956415   36246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-4431/.minikube/bin
	I0130 20:10:08.957231   36246 out.go:303] Setting JSON to false
	I0130 20:10:08.958463   36246 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3153,"bootTime":1706642256,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 20:10:08.958546   36246 start.go:138] virtualization: kvm guest
	I0130 20:10:08.960753   36246 out.go:177] * [false-255550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 20:10:08.962130   36246 notify.go:220] Checking for updates...
	I0130 20:10:08.962156   36246 out.go:177]   - MINIKUBE_LOCATION=18007
	I0130 20:10:08.963448   36246 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 20:10:08.964718   36246 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-4431/kubeconfig
	I0130 20:10:08.965916   36246 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-4431/.minikube
	I0130 20:10:08.967115   36246 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 20:10:08.968375   36246 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 20:10:08.970201   36246 config.go:182] Loaded profile config "NoKubernetes-649604": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0130 20:10:08.970296   36246 config.go:182] Loaded profile config "force-systemd-env-102780": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 20:10:08.970376   36246 config.go:182] Loaded profile config "pause-033412": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0130 20:10:08.970443   36246 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 20:10:09.003277   36246 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 20:10:09.004573   36246 start.go:298] selected driver: kvm2
	I0130 20:10:09.004586   36246 start.go:902] validating driver "kvm2" against <nil>
	I0130 20:10:09.004595   36246 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 20:10:09.006481   36246 out.go:177] 
	W0130 20:10:09.007782   36246 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0130 20:10:09.009027   36246 out.go:177] 

** /stderr **
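The MK_USAGE failure above is expected: the containerd runtime needs a CNI plugin, so `--cni=false` is rejected during driver validation, before any VM is created. A hypothetical sketch of that guard — `check_cni` is illustrative only, not minikube's actual code:

```shell
#!/bin/sh
# Illustrative version of the guard that fails above: the containerd
# runtime cannot run pods without a CNI, so --cni=false is a usage error.
check_cni() {
  runtime="$1" cni="$2"
  if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI' >&2
    return 14
  fi
}

check_cni containerd false
echo "exit: $?"
```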
net_test.go:88: 
----------------------- debugLogs start: false-255550 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-255550" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-255550

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /etc/docker/daemon.json:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: docker system info:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: cri-docker daemon status:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: cri-docker daemon config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: cri-dockerd version:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: containerd daemon status:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: containerd daemon config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /etc/containerd/config.toml:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: containerd config dump:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: crio daemon status:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: crio daemon config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: /etc/crio:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

>>> host: crio config:
* Profile "false-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255550"

----------------------- debugLogs end: false-255550 [took: 3.315967205s] --------------------------------
helpers_test.go:175: Cleaning up "false-255550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-255550
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-649604 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-649604 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.717545ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/SecondStartNoReconfiguration (66.57s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-033412 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-033412 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m6.550558643s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (66.57s)

TestPause/serial/Pause (1.46s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-033412 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-033412 --alsologtostderr -v=5: (1.457988857s)
--- PASS: TestPause/serial/Pause (1.46s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-033412 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-033412 --output=json --layout=cluster: exit status 2 (321.179181ms)

-- stdout --
	{"Name":"pause-033412","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-033412","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
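Note on the payload above: `minikube status --output=json --layout=cluster` reuses HTTP-style status codes for cluster state (200 = OK, 405 = Stopped, 418 = Paused), which is why the paused cluster reports StatusCode 418 and the test accepts exit status 2. A minimal sketch of consuming that layout, using a trimmed copy of the JSON captured in this log:

```python
import json

# Trimmed cluster-status JSON as captured above from:
#   minikube status -p pause-033412 --output=json --layout=cluster
# StatusCode values reuse HTTP codes: 200 = OK, 405 = Stopped, 418 = Paused.
raw = '''{"Name":"pause-033412","StatusCode":418,"StatusName":"Paused",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
"Nodes":[{"Name":"pause-033412","StatusCode":200,"StatusName":"OK",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(raw)
cluster_paused = status["StatusCode"] == 418
node_components = status["Nodes"][0]["Components"]

print(cluster_paused)                            # True
print(node_components["kubelet"]["StatusName"])  # Stopped
```

This matches what the test asserts: a paused profile reports the cluster and apiserver as Paused (418) and the kubelet as Stopped (405), while the kubeconfig component stays OK (200).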

TestPause/serial/Unpause (0.92s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-033412 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

TestPause/serial/PauseAgain (1.05s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-033412 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-033412 --alsologtostderr -v=5: (1.045209333s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

TestPause/serial/DeletePaused (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-033412 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

TestPause/serial/VerifyDeletedResources (0.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (157.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-430114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-430114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m37.252149238s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.25s)

TestStartStop/group/no-preload/serial/FirstStart (202.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-914136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-914136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (3m22.36516797s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (202.37s)

TestStartStop/group/embed-certs/serial/FirstStart (87.44s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-083220 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0130 20:12:33.660626   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
E0130 20:12:59.640897   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 20:13:43.140105   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-083220 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m27.437105956s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.44s)

TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-083220 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9291139f-e277-4b40-bda8-48f4f359bb30] Pending
helpers_test.go:344: "busybox" [9291139f-e277-4b40-bda8-48f4f359bb30] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9291139f-e277-4b40-bda8-48f4f359bb30] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004686313s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-083220 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-083220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-083220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.196306208s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-083220 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (91.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-083220 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-083220 --alsologtostderr -v=3: (1m31.756563225s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.76s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-430114 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3914b91a-a787-4b18-b091-a78b92fad9f7] Pending
helpers_test.go:344: "busybox" [3914b91a-a787-4b18-b091-a78b92fad9f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3914b91a-a787-4b18-b091-a78b92fad9f7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005183901s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-430114 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-430114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-430114 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (92.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-430114 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-430114 --alsologtostderr -v=3: (1m32.12503723s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-177460 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0130 20:14:56.595362   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-177460 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m2.267636022s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.27s)

TestStartStop/group/no-preload/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-914136 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa28b81c-61ec-4dbd-8c11-024bd6b43cf2] Pending
helpers_test.go:344: "busybox" [aa28b81c-61ec-4dbd-8c11-024bd6b43cf2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aa28b81c-61ec-4dbd-8c11-024bd6b43cf2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004731026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-914136 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-914136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-914136 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (91.58s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-914136 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-914136 --alsologtostderr -v=3: (1m31.581512419s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-083220 -n embed-certs-083220
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-083220 -n embed-certs-083220: exit status 7 (94.76554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-083220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (327.87s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-083220 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0130 20:15:36.706760   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-083220 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m27.494053821s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-083220 -n embed-certs-083220
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (327.87s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-177460 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [028d02c4-7a85-4e49-a9ec-43cd5ad0a3d1] Pending
helpers_test.go:344: "busybox" [028d02c4-7a85-4e49-a9ec-43cd5ad0a3d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [028d02c4-7a85-4e49-a9ec-43cd5ad0a3d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004482502s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-177460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-177460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-177460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.147586905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-177460 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-177460 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-177460 --alsologtostderr -v=3: (1m32.257831922s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-430114 -n old-k8s-version-430114
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-430114 -n old-k8s-version-430114: exit status 7 (78.964811ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-430114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (93.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-430114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-430114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m33.050510108s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-430114 -n old-k8s-version-430114
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (93.39s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-914136 -n no-preload-914136
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-914136 -n no-preload-914136: exit status 7 (81.608645ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-914136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (312.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-914136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-914136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m12.196737116s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-914136 -n no-preload-914136
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (312.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460: exit status 7 (83.336275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-177460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-177460 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0130 20:17:33.660200   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-177460 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m10.609841576s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.96s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vqq26" [0bd24704-89b1-4968-9dfa-cb3be067ed7b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vqq26" [0bd24704-89b1-4968-9dfa-cb3be067ed7b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004890516s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vqq26" [0bd24704-89b1-4968-9dfa-cb3be067ed7b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003821342s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-430114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-430114 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (2.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-430114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-430114 -n old-k8s-version-430114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-430114 -n old-k8s-version-430114: exit status 2 (279.031124ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-430114 -n old-k8s-version-430114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-430114 -n old-k8s-version-430114: exit status 2 (279.182767ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-430114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-430114 -n old-k8s-version-430114
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-430114 -n old-k8s-version-430114
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.66s)

TestStartStop/group/newest-cni/serial/FirstStart (59.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-855816 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0130 20:18:43.140861   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-855816 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (59.406699979s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-855816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-855816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.676308648s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/newest-cni/serial/Stop (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-855816 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-855816 --alsologtostderr -v=3: (2.114176383s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-855816 -n newest-cni-855816
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-855816 -n newest-cni-855816: exit status 7 (85.100756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-855816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (47.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-855816 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0130 20:19:19.596513   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.601800   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.612255   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.633330   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.673894   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.754634   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:19.915526   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:20.236636   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:20.876865   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:22.157674   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:24.718224   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:29.838853   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:19:40.079431   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-855816 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (46.790631233s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-855816 -n newest-cni-855816
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-855816 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-855816 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-855816 -n newest-cni-855816
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-855816 -n newest-cni-855816: exit status 2 (266.591458ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-855816 -n newest-cni-855816
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-855816 -n newest-cni-855816: exit status 2 (278.121974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-855816 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-855816 -n newest-cni-855816
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-855816 -n newest-cni-855816
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)

TestNetworkPlugins/group/auto/Start (101.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0130 20:19:56.595414   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/functional-395377/client.crt: no such file or directory
E0130 20:20:00.560287   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
E0130 20:20:41.521378   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.736011702s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.74s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqnxf" [211a080d-0bfa-4c77-a011-404e7973bf48] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqnxf" [211a080d-0bfa-4c77-a011-404e7973bf48] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.007763528s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nqnxf" [211a080d-0bfa-4c77-a011-404e7973bf48] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004597958s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-083220 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-083220 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-083220 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-083220 -n embed-certs-083220
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-083220 -n embed-certs-083220: exit status 2 (270.265172ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-083220 -n embed-certs-083220
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-083220 -n embed-certs-083220: exit status 2 (274.048927ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-083220 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-083220 -n embed-certs-083220
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-083220 -n embed-certs-083220
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.75s)

TestNetworkPlugins/group/kindnet/Start (73.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m13.335278873s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qxb8c" [1c54e5b5-2243-4bde-9b3d-55e0460c5349] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qxb8c" [1c54e5b5-2243-4bde-9b3d-55e0460c5349] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004067503s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (105.89s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0130 20:22:03.442193   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m45.892973255s)
--- PASS: TestNetworkPlugins/group/calico/Start (105.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c9rgv" [65ab7cf7-aecf-497f-bd1e-3bceb568e7a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005478203s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c9rgv" [65ab7cf7-aecf-497f-bd1e-3bceb568e7a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015398312s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-914136 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-914136 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.17s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-914136 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-914136 -n no-preload-914136
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-914136 -n no-preload-914136: exit status 2 (273.598037ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-914136 -n no-preload-914136
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-914136 -n no-preload-914136: exit status 2 (278.544176ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-914136 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-914136 -n no-preload-914136
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-914136 -n no-preload-914136
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestNetworkPlugins/group/custom-flannel/Start (88.62s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m28.621804937s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dmqml" [d6349378-befe-4a5c-b0f8-b4cccc3b6ae9] Running
E0130 20:22:33.660439   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/ingress-addon-legacy-397745/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006116742s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wtxcd" [c6bdc553-efa1-4b90-bb27-0c7e0ec5f378] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.024228326s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dmqml" [d6349378-befe-4a5c-b0f8-b4cccc3b6ae9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006606234s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-177460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-177460 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-177460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460: exit status 2 (394.331816ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460: exit status 2 (309.949819ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-177460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-177460 -n default-k8s-diff-port-177460
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.4s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b79mf" [723f5a99-10d2-4db8-9c27-082d7069b221] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b79mf" [723f5a99-10d2-4db8-9c27-082d7069b221] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00403474s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.40s)

TestNetworkPlugins/group/enable-default-cni/Start (114.6s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m54.60035687s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (114.60s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (105.74s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0130 20:23:26.192223   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m45.737864662s)
--- PASS: TestNetworkPlugins/group/flannel/Start (105.74s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k7gms" [0c3b022f-02a7-4945-997b-c97d7635d8d9] Running
E0130 20:23:43.140456   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/addons-444600/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007191979s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (13.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-brdlp" [62409b1c-9516-452b-a130-986e5c360572] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-brdlp" [62409b1c-9516-452b-a130-986e5c360572] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.008659475s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tk8qm" [7e538604-6025-4549-b290-dd7970664b32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tk8qm" [7e538604-6025-4549-b290-dd7970664b32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.016044663s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.39s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.39s)

TestNetworkPlugins/group/calico/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (66.78s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0130 20:24:19.596583   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-255550 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m6.778453519s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.78s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2pcrg" [2b0ec77e-aed6-4539-9f14-32cf8e6c9a9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 20:24:47.282367   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/old-k8s-version-430114/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2pcrg" [2b0ec77e-aed6-4539-9f14-32cf8e6c9a9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009610383s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-99swz" [2e4eefb3-6b5d-4e7c-9f1c-75e0c055061d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005031015s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fstbp" [846fb15b-eca5-403b-b06e-42a9594edd80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fstbp" [846fb15b-eca5-403b-b06e-42a9594edd80] Running
E0130 20:25:18.742849   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
E0130 20:25:18.748120   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
E0130 20:25:18.758420   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
E0130 20:25:18.778727   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
E0130 20:25:18.819733   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
E0130 20:25:18.900099   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004492007s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-255550 exec deployment/netcat -- nslookup kubernetes.default
E0130 20:25:19.060634   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0130 20:25:19.381563   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)
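The Localhost and HairPin checks above both reduce to a timed TCP connect (`nc -w 5 -i 5 -z <target> 8080`); hairpin mode is verified by the pod dialing its own service name. A minimal Python sketch of that probe — the `can_connect` helper is illustrative, not part of the minikube test suite:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout,
    roughly what `nc -w <timeout> -z host port` reports via its exit status."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

Against a listening port this returns True; a closed port yields a refused connection (or a timeout) and False.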

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-255550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-255550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wtcnb" [6855b4a6-aee3-4067-960a-f6603a327168] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 20:25:28.984534   11635 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-4431/.minikube/profiles/no-preload-914136/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wtcnb" [6855b4a6-aee3-4067-960a-f6603a327168] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004972092s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-255550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)
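The DNS check above only asserts that `nslookup kubernetes.default` succeeds inside the netcat pod. A rough out-of-cluster analogue using the system resolver — `kubernetes.default` itself only resolves against the cluster DNS, so the usage example probes `localhost` instead; the `resolves` helper is illustrative, not from the test suite:

```python
import socket

def resolves(name: str) -> bool:
    """Return True if the resolver maps `name` to at least one address,
    which is the pass/fail signal `nslookup <name>` provides."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:  # NXDOMAIN or resolver failure
        return False
```

For example, `resolves("localhost")` should succeed on any host, while a name under the reserved `.invalid` TLD should not.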

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-255550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)


Test skip (39/318)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
250 TestStartStop/group/disable-driver-mounts 0.17
266 TestNetworkPlugins/group/kubenet 3.65
274 TestNetworkPlugins/group/cilium 4.16

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-212862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-212862
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.65s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-255550 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-255550

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-255550" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-255550" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-255550" does not exist

>>> k8s: api server logs:
error: context "kubenet-255550" does not exist

>>> host: /etc/cni:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: ip a s:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: ip r s:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: iptables-save:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: iptables table nat:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-255550" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-255550" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-255550" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: kubelet daemon config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> k8s: kubelet logs:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-255550

>>> host: docker daemon status:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: docker daemon config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: docker system info:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: cri-docker daemon status:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: cri-docker daemon config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: cri-dockerd version:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: containerd daemon status:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: containerd daemon config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: containerd config dump:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: crio daemon status:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: crio daemon config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: /etc/crio:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

>>> host: crio config:
* Profile "kubenet-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255550"

----------------------- debugLogs end: kubenet-255550 [took: 3.479963305s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-255550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-255550
--- SKIP: TestNetworkPlugins/group/kubenet (3.65s)

TestNetworkPlugins/group/cilium (4.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-255550 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-255550

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-255550

>>> host: /etc/nsswitch.conf:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/hosts:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/resolv.conf:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-255550

>>> host: crictl pods:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: crictl containers:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> k8s: describe netcat deployment:
error: context "cilium-255550" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-255550" does not exist

>>> k8s: netcat logs:
error: context "cilium-255550" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-255550" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-255550" does not exist

>>> k8s: coredns logs:
error: context "cilium-255550" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-255550" does not exist

>>> k8s: api server logs:
error: context "cilium-255550" does not exist

>>> host: /etc/cni:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: ip a s:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: ip r s:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: iptables-save:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: iptables table nat:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-255550

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-255550

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-255550" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-255550" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-255550

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-255550

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-255550" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-255550" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-255550" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-255550" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-255550" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: kubelet daemon config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> k8s: kubelet logs:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-255550

>>> host: docker daemon status:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: docker daemon config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: docker system info:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: cri-docker daemon status:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: cri-docker daemon config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: cri-dockerd version:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: containerd daemon status:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: containerd daemon config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: containerd config dump:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: crio daemon status:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: crio daemon config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: /etc/crio:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

>>> host: crio config:
* Profile "cilium-255550" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255550"

----------------------- debugLogs end: cilium-255550 [took: 3.985336255s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-255550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-255550
--- SKIP: TestNetworkPlugins/group/cilium (4.16s)