Test Report: KVM_Linux_containerd 17634

6a47c51e356b14dff44e127278d7e2190d030982:2023-11-17:31915
Failed tests (2/306)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 34    | TestAddons/parallel/Headlamp | 3.27         |
| 52    | TestErrorSpam/setup          | 63.83        |
TestAddons/parallel/Headlamp (3.27s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-875867 --alsologtostderr -v=1
addons_test.go:823: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-875867 --alsologtostderr -v=1: exit status 11 (385.475215ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 16:00:39.993911   18421 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:00:39.994216   18421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:00:39.994227   18421 out.go:309] Setting ErrFile to fd 2...
	I1117 16:00:39.994232   18421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:00:39.994472   18421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:00:39.994813   18421 mustload.go:65] Loading cluster: addons-875867
	I1117 16:00:39.995230   18421 config.go:182] Loaded profile config "addons-875867": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:00:39.995251   18421 addons.go:594] checking whether the cluster is paused
	I1117 16:00:39.995390   18421 config.go:182] Loaded profile config "addons-875867": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:00:39.995408   18421 host.go:66] Checking if "addons-875867" exists ...
	I1117 16:00:39.995915   18421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:00:39.995957   18421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:00:40.010182   18421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I1117 16:00:40.010655   18421 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:00:40.011340   18421 main.go:141] libmachine: Using API Version  1
	I1117 16:00:40.011376   18421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:00:40.011775   18421 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:00:40.011959   18421 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 16:00:40.013665   18421 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 16:00:40.013861   18421 ssh_runner.go:195] Run: systemctl --version
	I1117 16:00:40.013882   18421 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 16:00:40.019207   18421 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 16:00:40.019564   18421 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 16:00:40.019608   18421 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 16:00:40.019766   18421 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 16:00:40.019964   18421 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 16:00:40.020122   18421 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 16:00:40.020265   18421 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 16:00:40.129645   18421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1117 16:00:40.129735   18421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1117 16:00:40.209641   18421 cri.go:89] found id: "ab0da72e2a631aa2c829e767ee0ea71d421f7239b0627f0629a584b210ff11fe"
	I1117 16:00:40.209669   18421 cri.go:89] found id: "03466f20ff24275530f7c8d9da200257be7e81a86da2d7e212f5d186768ad422"
	I1117 16:00:40.209676   18421 cri.go:89] found id: "01394289a1abd9f87a89135ea0d5c4c8b2def564f69dfa280e4f008040c5b4d1"
	I1117 16:00:40.209681   18421 cri.go:89] found id: "e1f80b392230b7952c1d5f157c0f66dd0566ab05013840e844dc3e9b72218b28"
	I1117 16:00:40.209687   18421 cri.go:89] found id: "210cfbbeebe7b022800c11a93ab23b2ac9f76b6231c9df65e2a821d3d610d939"
	I1117 16:00:40.209693   18421 cri.go:89] found id: "4474d7c984ba4bed3d9f7ddb52395a97fdf73e970510b07a5f8c08d29f707e0c"
	I1117 16:00:40.209699   18421 cri.go:89] found id: "1b2f0c61f55c0dbb7a8eadee722093a930a692852f34def27c51a5caccca4c8d"
	I1117 16:00:40.209704   18421 cri.go:89] found id: "d656e538a35b36750585a66de70daeb779e03446e6d9a0a7f4a702f0c2dedb81"
	I1117 16:00:40.209721   18421 cri.go:89] found id: "f5708c716eaf7bc566920612c5182ad058108a7454774999bcb227a62df8db5a"
	I1117 16:00:40.209729   18421 cri.go:89] found id: "ad5d864d5ffda9c338ef73ee2e52ae0b8c74c046ee9ea1f7e5a6e9b1515fe801"
	I1117 16:00:40.209735   18421 cri.go:89] found id: "4713fd5c495e2fd1939e4fb40b44649ec662fcec7049e3890ed26118943ee10d"
	I1117 16:00:40.209739   18421 cri.go:89] found id: "8196943b8fa1047dba40d738834aed68ad22d2ce81ee0f2557d67907a7a7299e"
	I1117 16:00:40.209744   18421 cri.go:89] found id: "931b907b86b23a5d9beeb90916cf758d851dc06d9010ef558b2b129d6e2a03f1"
	I1117 16:00:40.209752   18421 cri.go:89] found id: "cbfe322e2603666fcd8943407559fc80d35458a0f5c299d3f663f72f3132f5aa"
	I1117 16:00:40.209757   18421 cri.go:89] found id: "596bfcb9ba124bbc7653a75dd20f7006fb025a8a05c351e659a3ac2629c17bfe"
	I1117 16:00:40.209763   18421 cri.go:89] found id: "32d1553ff2f9a3b0f8e234df29034f1808f1192811863e8e7a18d7ffc5e9797c"
	I1117 16:00:40.209768   18421 cri.go:89] found id: "286365e850bbdf350b9d76e7ac9760a0bcd890c8a558359fa6d4593791647dca"
	I1117 16:00:40.209785   18421 cri.go:89] found id: "664fabe8585b6d46e94c1858346e0b94e6c4635557de779f6158a6d85150b2f6"
	I1117 16:00:40.209791   18421 cri.go:89] found id: "ac43b07c435a45edacd7cb544e74bc8b2bf1ce72b83b9361616e1a8c130513bb"
	I1117 16:00:40.209796   18421 cri.go:89] found id: ""
	I1117 16:00:40.209859   18421 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1117 16:00:40.308899   18421 main.go:141] libmachine: Making call to close driver server
	I1117 16:00:40.309015   18421 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 16:00:40.309318   18421 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:00:40.309336   18421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:00:40.309357   18421 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 16:00:40.312654   18421 out.go:177] 
	W1117 16:00:40.314303   18421 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-17T16:00:40Z" level=error msg="stat /run/containerd/runc/k8s.io/b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75: no such file or directory"
	
	W1117 16:00:40.314333   18421 out.go:239] * 
	W1117 16:00:40.316597   18421 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 16:00:40.318534   18421 out.go:177] 

** /stderr **
addons_test.go:825: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-875867 --alsologtostderr -v=1": exit status 11
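The stderr above shows the actual root cause: minikube's paused-state check shells out to `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and runc exits with status 1 because one container's state directory disappeared between enumeration and stat (a race with teardown of a short-lived pod container), which minikube then surfaces as MK_ADDON_ENABLE_PAUSED. A hypothetical sketch of classifying such stat failures as transient and therefore retryable follows; the helper name and the decision to retry are assumptions for illustration, not minikube code:

```python
import re

# runc prints this when a container's state directory vanished between
# enumerating /run/containerd/runc/k8s.io and stat-ing an entry --
# typically a race with teardown of a short-lived pod container.
STALE_STATE_RE = re.compile(r'stat \S*/[0-9a-f]{64}: no such file or directory')

def is_transient_runc_error(stderr: str) -> bool:
    """True when every level=error line is a stale-state stat failure,
    i.e. the `runc list` could simply be retried instead of failing."""
    errors = [l for l in stderr.splitlines() if 'level=error' in l]
    return bool(errors) and all(STALE_STATE_RE.search(l) for l in errors)

# The exact error line captured in this test run:
log = ('time="2023-11-17T16:00:40Z" level=error msg="stat '
       '/run/containerd/runc/k8s.io/'
       'b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75: '
       'no such file or directory"')
print(is_transient_runc_error(log))  # -> True
```

Under this reading, a retry of the paused-state check would likely have succeeded, which is consistent with the very short (3.27s) failure duration.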
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-875867 -n addons-875867
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-875867 logs -n 25: (2.005286681s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:56 UTC |                     |
	|         | -p download-only-196672                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |                     |
	|         | -p download-only-196672                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC | 17 Nov 23 15:57 UTC |
	| delete  | -p download-only-196672                                                                     | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC | 17 Nov 23 15:57 UTC |
	| delete  | -p download-only-196672                                                                     | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC | 17 Nov 23 15:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-874673 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |                     |
	|         | binary-mirror-874673                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35083                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-874673                                                                     | binary-mirror-874673 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC | 17 Nov 23 15:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |                     |
	|         | addons-875867                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |                     |
	|         | addons-875867                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-875867 --wait=true                                                                | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC | 17 Nov 23 16:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-875867 addons                                                                        | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-875867 ssh cat                                                                       | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | /opt/local-path-provisioner/pvc-bb6db982-17de-4131-8663-4e9ae86f5bf3_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-875867 addons disable                                                                | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | -p addons-875867                                                                            |                      |         |         |                     |                     |
	| ip      | addons-875867 ip                                                                            | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	| addons  | addons-875867 addons disable                                                                | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-875867 addons disable                                                                | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC |                     |
	|         | addons-875867                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC | 17 Nov 23 16:00 UTC |
	|         | addons-875867                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-875867        | jenkins | v1.32.0 | 17 Nov 23 16:00 UTC |                     |
	|         | -p addons-875867                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 15:57:10
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:57:10.107328   16907 out.go:296] Setting OutFile to fd 1 ...
	I1117 15:57:10.107591   16907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:10.107600   16907 out.go:309] Setting ErrFile to fd 2...
	I1117 15:57:10.107604   16907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:10.107767   16907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 15:57:10.108501   16907 out.go:303] Setting JSON to false
	I1117 15:57:10.109323   16907 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2379,"bootTime":1700234251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 15:57:10.109381   16907 start.go:138] virtualization: kvm guest
	I1117 15:57:10.111708   16907 out.go:177] * [addons-875867] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 15:57:10.113372   16907 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 15:57:10.113411   16907 notify.go:220] Checking for updates...
	I1117 15:57:10.114745   16907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 15:57:10.116221   16907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 15:57:10.117668   16907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 15:57:10.118906   16907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 15:57:10.120287   16907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 15:57:10.121795   16907 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 15:57:10.154263   16907 out.go:177] * Using the kvm2 driver based on user configuration
	I1117 15:57:10.155656   16907 start.go:298] selected driver: kvm2
	I1117 15:57:10.155673   16907 start.go:902] validating driver "kvm2" against <nil>
	I1117 15:57:10.155687   16907 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 15:57:10.156613   16907 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:57:10.156682   16907 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9289/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 15:57:10.171479   16907 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 15:57:10.171558   16907 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1117 15:57:10.171775   16907 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 15:57:10.171820   16907 cni.go:84] Creating CNI manager for ""
	I1117 15:57:10.171832   16907 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1117 15:57:10.171841   16907 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 15:57:10.171857   16907 start_flags.go:323] config:
	{Name:addons-875867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-875867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:57:10.171990   16907 iso.go:125] acquiring lock: {Name:mkc7f4527225ecf65fe1f10414ae202f7d6a2f67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:57:10.174197   16907 out.go:177] * Starting control plane node addons-875867 in cluster addons-875867
	I1117 15:57:10.175778   16907 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1117 15:57:10.175812   16907 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1117 15:57:10.175820   16907 cache.go:56] Caching tarball of preloaded images
	I1117 15:57:10.175933   16907 preload.go:174] Found /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 15:57:10.175946   16907 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1117 15:57:10.176273   16907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/config.json ...
	I1117 15:57:10.176296   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/config.json: {Name:mk48834dc88022995430efb77050821d8da01be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:10.176445   16907 start.go:365] acquiring machines lock for addons-875867: {Name:mk7b423ab784fc0d9b18edc30123656afa266c93 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1117 15:57:10.176505   16907 start.go:369] acquired machines lock for "addons-875867" in 43.486µs
	I1117 15:57:10.176528   16907 start.go:93] Provisioning new machine with config: &{Name:addons-875867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-875867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1117 15:57:10.176605   16907 start.go:125] createHost starting for "" (driver="kvm2")
	I1117 15:57:10.178517   16907 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1117 15:57:10.178757   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:57:10.178814   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:57:10.192996   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I1117 15:57:10.193458   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:57:10.193965   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:57:10.193987   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:57:10.194329   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:57:10.194525   16907 main.go:141] libmachine: (addons-875867) Calling .GetMachineName
	I1117 15:57:10.194740   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:10.194865   16907 start.go:159] libmachine.API.Create for "addons-875867" (driver="kvm2")
	I1117 15:57:10.194897   16907 client.go:168] LocalClient.Create starting
	I1117 15:57:10.194945   16907 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem
	I1117 15:57:10.299994   16907 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/cert.pem
	I1117 15:57:10.533184   16907 main.go:141] libmachine: Running pre-create checks...
	I1117 15:57:10.533211   16907 main.go:141] libmachine: (addons-875867) Calling .PreCreateCheck
	I1117 15:57:10.533726   16907 main.go:141] libmachine: (addons-875867) Calling .GetConfigRaw
	I1117 15:57:10.534166   16907 main.go:141] libmachine: Creating machine...
	I1117 15:57:10.534189   16907 main.go:141] libmachine: (addons-875867) Calling .Create
	I1117 15:57:10.534357   16907 main.go:141] libmachine: (addons-875867) Creating KVM machine...
	I1117 15:57:10.535573   16907 main.go:141] libmachine: (addons-875867) DBG | found existing default KVM network
	I1117 15:57:10.536261   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:10.536126   16928 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I1117 15:57:10.541936   16907 main.go:141] libmachine: (addons-875867) DBG | trying to create private KVM network mk-addons-875867 192.168.39.0/24...
	I1117 15:57:10.613534   16907 main.go:141] libmachine: (addons-875867) DBG | private KVM network mk-addons-875867 192.168.39.0/24 created
	I1117 15:57:10.613562   16907 main.go:141] libmachine: (addons-875867) Setting up store path in /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867 ...
	I1117 15:57:10.613589   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:10.613520   16928 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 15:57:10.613603   16907 main.go:141] libmachine: (addons-875867) Building disk image from file:///home/jenkins/minikube-integration/17634-9289/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1117 15:57:10.613713   16907 main.go:141] libmachine: (addons-875867) Downloading /home/jenkins/minikube-integration/17634-9289/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17634-9289/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1117 15:57:10.825677   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:10.825569   16928 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa...
	I1117 15:57:11.190454   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:11.190316   16928 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/addons-875867.rawdisk...
	I1117 15:57:11.190482   16907 main.go:141] libmachine: (addons-875867) DBG | Writing magic tar header
	I1117 15:57:11.190496   16907 main.go:141] libmachine: (addons-875867) DBG | Writing SSH key tar header
	I1117 15:57:11.190509   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:11.190442   16928 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867 ...
	I1117 15:57:11.190571   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867
	I1117 15:57:11.190598   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9289/.minikube/machines
	I1117 15:57:11.190608   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 15:57:11.190617   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9289
	I1117 15:57:11.190624   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1117 15:57:11.190633   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867 (perms=drwx------)
	I1117 15:57:11.190663   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins/minikube-integration/17634-9289/.minikube/machines (perms=drwxr-xr-x)
	I1117 15:57:11.190677   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home/jenkins
	I1117 15:57:11.190689   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins/minikube-integration/17634-9289/.minikube (perms=drwxr-xr-x)
	I1117 15:57:11.190702   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins/minikube-integration/17634-9289 (perms=drwxrwxr-x)
	I1117 15:57:11.190712   16907 main.go:141] libmachine: (addons-875867) DBG | Checking permissions on dir: /home
	I1117 15:57:11.190721   16907 main.go:141] libmachine: (addons-875867) DBG | Skipping /home - not owner
	I1117 15:57:11.190731   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1117 15:57:11.190738   16907 main.go:141] libmachine: (addons-875867) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1117 15:57:11.190744   16907 main.go:141] libmachine: (addons-875867) Creating domain...
	I1117 15:57:11.192043   16907 main.go:141] libmachine: (addons-875867) define libvirt domain using xml: 
	I1117 15:57:11.192062   16907 main.go:141] libmachine: (addons-875867) <domain type='kvm'>
	I1117 15:57:11.192084   16907 main.go:141] libmachine: (addons-875867)   <name>addons-875867</name>
	I1117 15:57:11.192093   16907 main.go:141] libmachine: (addons-875867)   <memory unit='MiB'>4000</memory>
	I1117 15:57:11.192104   16907 main.go:141] libmachine: (addons-875867)   <vcpu>2</vcpu>
	I1117 15:57:11.192116   16907 main.go:141] libmachine: (addons-875867)   <features>
	I1117 15:57:11.192126   16907 main.go:141] libmachine: (addons-875867)     <acpi/>
	I1117 15:57:11.192150   16907 main.go:141] libmachine: (addons-875867)     <apic/>
	I1117 15:57:11.192162   16907 main.go:141] libmachine: (addons-875867)     <pae/>
	I1117 15:57:11.192176   16907 main.go:141] libmachine: (addons-875867)     
	I1117 15:57:11.192195   16907 main.go:141] libmachine: (addons-875867)   </features>
	I1117 15:57:11.192208   16907 main.go:141] libmachine: (addons-875867)   <cpu mode='host-passthrough'>
	I1117 15:57:11.192219   16907 main.go:141] libmachine: (addons-875867)   
	I1117 15:57:11.192230   16907 main.go:141] libmachine: (addons-875867)   </cpu>
	I1117 15:57:11.192243   16907 main.go:141] libmachine: (addons-875867)   <os>
	I1117 15:57:11.192255   16907 main.go:141] libmachine: (addons-875867)     <type>hvm</type>
	I1117 15:57:11.192270   16907 main.go:141] libmachine: (addons-875867)     <boot dev='cdrom'/>
	I1117 15:57:11.192282   16907 main.go:141] libmachine: (addons-875867)     <boot dev='hd'/>
	I1117 15:57:11.192297   16907 main.go:141] libmachine: (addons-875867)     <bootmenu enable='no'/>
	I1117 15:57:11.192312   16907 main.go:141] libmachine: (addons-875867)   </os>
	I1117 15:57:11.192336   16907 main.go:141] libmachine: (addons-875867)   <devices>
	I1117 15:57:11.192355   16907 main.go:141] libmachine: (addons-875867)     <disk type='file' device='cdrom'>
	I1117 15:57:11.192372   16907 main.go:141] libmachine: (addons-875867)       <source file='/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/boot2docker.iso'/>
	I1117 15:57:11.192387   16907 main.go:141] libmachine: (addons-875867)       <target dev='hdc' bus='scsi'/>
	I1117 15:57:11.192400   16907 main.go:141] libmachine: (addons-875867)       <readonly/>
	I1117 15:57:11.192413   16907 main.go:141] libmachine: (addons-875867)     </disk>
	I1117 15:57:11.192429   16907 main.go:141] libmachine: (addons-875867)     <disk type='file' device='disk'>
	I1117 15:57:11.192447   16907 main.go:141] libmachine: (addons-875867)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1117 15:57:11.192466   16907 main.go:141] libmachine: (addons-875867)       <source file='/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/addons-875867.rawdisk'/>
	I1117 15:57:11.192478   16907 main.go:141] libmachine: (addons-875867)       <target dev='hda' bus='virtio'/>
	I1117 15:57:11.192489   16907 main.go:141] libmachine: (addons-875867)     </disk>
	I1117 15:57:11.192502   16907 main.go:141] libmachine: (addons-875867)     <interface type='network'>
	I1117 15:57:11.192540   16907 main.go:141] libmachine: (addons-875867)       <source network='mk-addons-875867'/>
	I1117 15:57:11.192564   16907 main.go:141] libmachine: (addons-875867)       <model type='virtio'/>
	I1117 15:57:11.192595   16907 main.go:141] libmachine: (addons-875867)     </interface>
	I1117 15:57:11.192629   16907 main.go:141] libmachine: (addons-875867)     <interface type='network'>
	I1117 15:57:11.192642   16907 main.go:141] libmachine: (addons-875867)       <source network='default'/>
	I1117 15:57:11.192651   16907 main.go:141] libmachine: (addons-875867)       <model type='virtio'/>
	I1117 15:57:11.192666   16907 main.go:141] libmachine: (addons-875867)     </interface>
	I1117 15:57:11.192682   16907 main.go:141] libmachine: (addons-875867)     <serial type='pty'>
	I1117 15:57:11.192699   16907 main.go:141] libmachine: (addons-875867)       <target port='0'/>
	I1117 15:57:11.192712   16907 main.go:141] libmachine: (addons-875867)     </serial>
	I1117 15:57:11.192724   16907 main.go:141] libmachine: (addons-875867)     <console type='pty'>
	I1117 15:57:11.192736   16907 main.go:141] libmachine: (addons-875867)       <target type='serial' port='0'/>
	I1117 15:57:11.192746   16907 main.go:141] libmachine: (addons-875867)     </console>
	I1117 15:57:11.192755   16907 main.go:141] libmachine: (addons-875867)     <rng model='virtio'>
	I1117 15:57:11.192770   16907 main.go:141] libmachine: (addons-875867)       <backend model='random'>/dev/random</backend>
	I1117 15:57:11.192782   16907 main.go:141] libmachine: (addons-875867)     </rng>
	I1117 15:57:11.192802   16907 main.go:141] libmachine: (addons-875867)     
	I1117 15:57:11.192816   16907 main.go:141] libmachine: (addons-875867)     
	I1117 15:57:11.192827   16907 main.go:141] libmachine: (addons-875867)   </devices>
	I1117 15:57:11.192843   16907 main.go:141] libmachine: (addons-875867) </domain>
	I1117 15:57:11.192859   16907 main.go:141] libmachine: (addons-875867) 
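The domain definition logged line-by-line above is ordinary libvirt domain XML. A minimal sketch of rendering a comparable definition with Go's `text/template` follows; the `renderDomainXML` helper and the placeholder paths are hypothetical illustrations, not minikube's actual generator.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// domainConfig holds the values that vary per VM in the logged XML.
type domainConfig struct {
	Name       string
	MemoryMiB  int
	VCPUs      int
	ISOPath    string
	DiskPath   string
	PrivateNet string
}

// domainTmpl mirrors the shape of the <domain> definition from the log:
// boot from the cdrom ISO first, then the raw disk, with two virtio NICs.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

// renderDomainXML fills the template with one VM's settings.
func renderDomainXML(cfg domainConfig) (string, error) {
	t, err := template.New("domain").Parse(domainTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, cfg); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	xml, err := renderDomainXML(domainConfig{
		Name:       "addons-875867",
		MemoryMiB:  4000,
		VCPUs:      2,
		ISOPath:    "/path/to/boot2docker.iso", // hypothetical path
		DiskPath:   "/path/to/addons-875867.rawdisk",
		PrivateNet: "mk-addons-875867",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(xml)
}
```

The rendered XML would then be handed to libvirt (the log's "define libvirt domain using xml" step) to define the VM.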
	I1117 15:57:11.199045   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:94:44:d6 in network default
	I1117 15:57:11.199568   16907 main.go:141] libmachine: (addons-875867) Ensuring networks are active...
	I1117 15:57:11.199590   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:11.200208   16907 main.go:141] libmachine: (addons-875867) Ensuring network default is active
	I1117 15:57:11.200495   16907 main.go:141] libmachine: (addons-875867) Ensuring network mk-addons-875867 is active
	I1117 15:57:11.201285   16907 main.go:141] libmachine: (addons-875867) Getting domain xml...
	I1117 15:57:11.202004   16907 main.go:141] libmachine: (addons-875867) Creating domain...
	I1117 15:57:12.644310   16907 main.go:141] libmachine: (addons-875867) Waiting to get IP...
	I1117 15:57:12.645133   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:12.645557   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:12.645624   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:12.645542   16928 retry.go:31] will retry after 211.136434ms: waiting for machine to come up
	I1117 15:57:12.857953   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:12.858426   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:12.858461   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:12.858386   16928 retry.go:31] will retry after 268.868422ms: waiting for machine to come up
	I1117 15:57:13.129013   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:13.129502   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:13.129535   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:13.129454   16928 retry.go:31] will retry after 389.542641ms: waiting for machine to come up
	I1117 15:57:13.520706   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:13.521104   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:13.521134   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:13.521059   16928 retry.go:31] will retry after 409.734079ms: waiting for machine to come up
	I1117 15:57:13.932651   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:13.933028   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:13.933049   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:13.932984   16928 retry.go:31] will retry after 510.912991ms: waiting for machine to come up
	I1117 15:57:14.445668   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:14.446109   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:14.446153   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:14.446074   16928 retry.go:31] will retry after 793.690828ms: waiting for machine to come up
	I1117 15:57:15.241510   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:15.241931   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:15.241964   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:15.241883   16928 retry.go:31] will retry after 806.450447ms: waiting for machine to come up
	I1117 15:57:16.050040   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:16.050483   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:16.050513   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:16.050426   16928 retry.go:31] will retry after 995.947466ms: waiting for machine to come up
	I1117 15:57:17.047583   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:17.047966   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:17.047997   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:17.047928   16928 retry.go:31] will retry after 1.80637689s: waiting for machine to come up
	I1117 15:57:18.856017   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:18.856424   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:18.856443   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:18.856376   16928 retry.go:31] will retry after 1.624837459s: waiting for machine to come up
	I1117 15:57:20.483259   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:20.483697   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:20.483730   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:20.483633   16928 retry.go:31] will retry after 2.24073629s: waiting for machine to come up
	I1117 15:57:22.727042   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:22.727420   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:22.727448   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:22.727379   16928 retry.go:31] will retry after 2.832817966s: waiting for machine to come up
	I1117 15:57:25.561546   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:25.561927   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:25.561948   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:25.561901   16928 retry.go:31] will retry after 4.001157685s: waiting for machine to come up
	I1117 15:57:29.568259   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:29.568736   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find current IP address of domain addons-875867 in network mk-addons-875867
	I1117 15:57:29.568761   16907 main.go:141] libmachine: (addons-875867) DBG | I1117 15:57:29.568697   16928 retry.go:31] will retry after 4.927075746s: waiting for machine to come up
	I1117 15:57:34.500486   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.500965   16907 main.go:141] libmachine: (addons-875867) Found IP for machine: 192.168.39.118
	I1117 15:57:34.500989   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has current primary IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.500998   16907 main.go:141] libmachine: (addons-875867) Reserving static IP address...
	I1117 15:57:34.501384   16907 main.go:141] libmachine: (addons-875867) DBG | unable to find host DHCP lease matching {name: "addons-875867", mac: "52:54:00:a8:45:2c", ip: "192.168.39.118"} in network mk-addons-875867
	I1117 15:57:34.575401   16907 main.go:141] libmachine: (addons-875867) DBG | Getting to WaitForSSH function...
	I1117 15:57:34.575436   16907 main.go:141] libmachine: (addons-875867) Reserved static IP address: 192.168.39.118
	I1117 15:57:34.575449   16907 main.go:141] libmachine: (addons-875867) Waiting for SSH to be available...
	I1117 15:57:34.578023   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.578367   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:34.578396   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.578730   16907 main.go:141] libmachine: (addons-875867) DBG | Using SSH client type: external
	I1117 15:57:34.578788   16907 main.go:141] libmachine: (addons-875867) DBG | Using SSH private key: /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa (-rw-------)
	I1117 15:57:34.578832   16907 main.go:141] libmachine: (addons-875867) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1117 15:57:34.578852   16907 main.go:141] libmachine: (addons-875867) DBG | About to run SSH command:
	I1117 15:57:34.578901   16907 main.go:141] libmachine: (addons-875867) DBG | exit 0
	I1117 15:57:34.670728   16907 main.go:141] libmachine: (addons-875867) DBG | SSH cmd err, output: <nil>: 
	I1117 15:57:34.671112   16907 main.go:141] libmachine: (addons-875867) KVM machine creation complete!
	I1117 15:57:34.671362   16907 main.go:141] libmachine: (addons-875867) Calling .GetConfigRaw
	I1117 15:57:34.671924   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:34.672139   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:34.672368   16907 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1117 15:57:34.672387   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:57:34.673598   16907 main.go:141] libmachine: Detecting operating system of created instance...
	I1117 15:57:34.673614   16907 main.go:141] libmachine: Waiting for SSH to be available...
	I1117 15:57:34.673622   16907 main.go:141] libmachine: Getting to WaitForSSH function...
	I1117 15:57:34.673640   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:34.675919   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.676302   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:34.676318   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.676447   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:34.676633   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.676861   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.677036   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:34.677249   16907 main.go:141] libmachine: Using SSH client type: native
	I1117 15:57:34.677634   16907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1117 15:57:34.677648   16907 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1117 15:57:34.790011   16907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1117 15:57:34.790073   16907 main.go:141] libmachine: Detecting the provisioner...
	I1117 15:57:34.790088   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:34.792560   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.792955   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:34.792979   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.793191   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:34.793386   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.793546   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.793678   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:34.793853   16907 main.go:141] libmachine: Using SSH client type: native
	I1117 15:57:34.794212   16907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1117 15:57:34.794226   16907 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1117 15:57:34.907429   16907 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1117 15:57:34.907513   16907 main.go:141] libmachine: found compatible host: buildroot
	I1117 15:57:34.907528   16907 main.go:141] libmachine: Provisioning with buildroot...
	I1117 15:57:34.907540   16907 main.go:141] libmachine: (addons-875867) Calling .GetMachineName
	I1117 15:57:34.907804   16907 buildroot.go:166] provisioning hostname "addons-875867"
	I1117 15:57:34.907827   16907 main.go:141] libmachine: (addons-875867) Calling .GetMachineName
	I1117 15:57:34.908040   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:34.910847   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.911204   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:34.911232   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:34.911406   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:34.911577   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.911689   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:34.911810   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:34.911941   16907 main.go:141] libmachine: Using SSH client type: native
	I1117 15:57:34.912247   16907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1117 15:57:34.912260   16907 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-875867 && echo "addons-875867" | sudo tee /etc/hostname
	I1117 15:57:35.040056   16907 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-875867
	
	I1117 15:57:35.040103   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.042780   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.043045   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.043076   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.043262   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.043463   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.043611   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.043726   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.043866   16907 main.go:141] libmachine: Using SSH client type: native
	I1117 15:57:35.044209   16907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1117 15:57:35.044233   16907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-875867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-875867/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-875867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1117 15:57:35.172021   16907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
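The hostname step above runs a small reconciliation script over /etc/hosts: rewrite an existing `127.0.1.1` entry if one is present, otherwise append one. A minimal replay of that logic, run against a scratch file so it needs no root (`HOSTS` and `NAME` are stand-ins for /etc/hosts and the profile name; the real provisioner pipes through `sudo tee`/`sudo sed` instead):

```shell
HOSTS=$(mktemp)
NAME=addons-875867
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # rewrite the existing 127.0.1.1 entry in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # or append a fresh entry
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"   # -> 127.0.1.1 addons-875867
rm -f "$HOSTS"
```

The empty SSH output (`SSH cmd err, output: <nil>:`) in the log corresponds to the rewrite branch, where `sed -i` prints nothing.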
	I1117 15:57:35.172065   16907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17634-9289/.minikube CaCertPath:/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17634-9289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17634-9289/.minikube}
	I1117 15:57:35.172093   16907 buildroot.go:174] setting up certificates
	I1117 15:57:35.172111   16907 provision.go:83] configureAuth start
	I1117 15:57:35.172129   16907 main.go:141] libmachine: (addons-875867) Calling .GetMachineName
	I1117 15:57:35.172431   16907 main.go:141] libmachine: (addons-875867) Calling .GetIP
	I1117 15:57:35.175243   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.175594   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.175625   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.175786   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.177697   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.178008   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.178033   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.178151   16907 provision.go:138] copyHostCerts
	I1117 15:57:35.178209   16907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17634-9289/.minikube/ca.pem (1082 bytes)
	I1117 15:57:35.178333   16907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17634-9289/.minikube/cert.pem (1123 bytes)
	I1117 15:57:35.178424   16907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17634-9289/.minikube/key.pem (1679 bytes)
	I1117 15:57:35.178482   16907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17634-9289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca-key.pem org=jenkins.addons-875867 san=[192.168.39.118 192.168.39.118 localhost 127.0.0.1 minikube addons-875867]
	I1117 15:57:35.257122   16907 provision.go:172] copyRemoteCerts
	I1117 15:57:35.257175   16907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1117 15:57:35.257196   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.259693   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.260157   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.260189   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.260394   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.260581   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.260785   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.260926   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:57:35.348052   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1117 15:57:35.371825   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1117 15:57:35.395722   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1117 15:57:35.419853   16907 provision.go:86] duration metric: configureAuth took 247.718404ms
	I1117 15:57:35.419880   16907 buildroot.go:189] setting minikube options for container-runtime
	I1117 15:57:35.420084   16907 config.go:182] Loaded profile config "addons-875867": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 15:57:35.420107   16907 main.go:141] libmachine: Checking connection to Docker...
	I1117 15:57:35.420122   16907 main.go:141] libmachine: (addons-875867) Calling .GetURL
	I1117 15:57:35.421168   16907 main.go:141] libmachine: (addons-875867) DBG | Using libvirt version 6000000
	I1117 15:57:35.423214   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.423535   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.423560   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.423732   16907 main.go:141] libmachine: Docker is up and running!
	I1117 15:57:35.423744   16907 main.go:141] libmachine: Reticulating splines...
	I1117 15:57:35.423750   16907 client.go:171] LocalClient.Create took 25.228845572s
	I1117 15:57:35.423773   16907 start.go:167] duration metric: libmachine.API.Create for "addons-875867" took 25.228915117s
	I1117 15:57:35.423787   16907 start.go:300] post-start starting for "addons-875867" (driver="kvm2")
	I1117 15:57:35.423801   16907 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1117 15:57:35.423825   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:35.424068   16907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1117 15:57:35.424098   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.426425   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.426811   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.426841   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.427098   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.427302   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.427498   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.427656   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:57:35.512923   16907 ssh_runner.go:195] Run: cat /etc/os-release
	I1117 15:57:35.517497   16907 info.go:137] Remote host: Buildroot 2021.02.12
	I1117 15:57:35.517537   16907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9289/.minikube/addons for local assets ...
	I1117 15:57:35.517634   16907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9289/.minikube/files for local assets ...
	I1117 15:57:35.517666   16907 start.go:303] post-start completed in 93.869291ms
	I1117 15:57:35.517707   16907 main.go:141] libmachine: (addons-875867) Calling .GetConfigRaw
	I1117 15:57:35.518268   16907 main.go:141] libmachine: (addons-875867) Calling .GetIP
	I1117 15:57:35.520606   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.521035   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.521068   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.521279   16907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/config.json ...
	I1117 15:57:35.521465   16907 start.go:128] duration metric: createHost completed in 25.34485071s
	I1117 15:57:35.521488   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.523475   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.523780   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.523806   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.523948   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.524129   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.524286   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.524422   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.524591   16907 main.go:141] libmachine: Using SSH client type: native
	I1117 15:57:35.524992   16907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1117 15:57:35.525005   16907 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1117 15:57:35.639346   16907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1700236655.618985138
	
	I1117 15:57:35.639371   16907 fix.go:206] guest clock: 1700236655.618985138
	I1117 15:57:35.639381   16907 fix.go:219] Guest: 2023-11-17 15:57:35.618985138 +0000 UTC Remote: 2023-11-17 15:57:35.521477456 +0000 UTC m=+25.464134080 (delta=97.507682ms)
	I1117 15:57:35.639405   16907 fix.go:190] guest clock delta is within tolerance: 97.507682ms
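The `fix.go` lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the machine when the absolute delta is small (here 97.5ms). A hypothetical re-creation of that delta check in shell/awk, using the two timestamps reported in the log and an assumed 2-second tolerance:

```shell
# values taken from the log lines above; tolerance is an assumption
guest=1700236655.618985138   # what "date +%s.%N" printed inside the VM
remote=1700236655.521477456  # host-side reference time
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { d = g - r; if (d < 0) d = -d; printf "%.9f", d }')
if awk -v d="$delta" 'BEGIN { exit !(d < 2) }'; then
    echo "guest clock delta within tolerance: ${delta}s"
fi
```

awk is used for the arithmetic because POSIX shell has no floating-point; double precision is ample for a sub-second skew check.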
	I1117 15:57:35.639411   16907 start.go:83] releasing machines lock for "addons-875867", held for 25.462893991s
	I1117 15:57:35.639437   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:35.639703   16907 main.go:141] libmachine: (addons-875867) Calling .GetIP
	I1117 15:57:35.642138   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.642483   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.642517   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.642612   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:35.643094   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:35.643279   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:57:35.643386   16907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1117 15:57:35.643439   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.643497   16907 ssh_runner.go:195] Run: cat /version.json
	I1117 15:57:35.643527   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:57:35.645908   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.645937   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.646270   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.646312   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:35.646339   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.646356   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:35.646509   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.646616   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:57:35.646697   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.646772   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:57:35.646853   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.646913   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:57:35.646976   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:57:35.647020   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:57:35.728578   16907 ssh_runner.go:195] Run: systemctl --version
	I1117 15:57:35.757749   16907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1117 15:57:35.763646   16907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1117 15:57:35.763724   16907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1117 15:57:35.779978   16907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
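The `find ... -exec mv` step above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so the runtime ignores them. A sketch of the same rename against a scratch directory (`NETD` stands in for /etc/cni/net.d; no sudo needed):

```shell
NETD=$(mktemp -d)
touch "$NETD/87-podman-bridge.conflist" "$NETD/99-loopback.conf"
# rename bridge/podman configs not already disabled, as the log's find does
find "$NETD" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
RESULT=$(ls "$NETD")
echo "$RESULT"
rm -rf "$NETD"
```

The loopback config is untouched, matching the log's "loopback cni configuration skipped" warning path.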
	I1117 15:57:35.780001   16907 start.go:472] detecting cgroup driver to use...
	I1117 15:57:35.780064   16907 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1117 15:57:36.026366   16907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1117 15:57:36.040730   16907 docker.go:203] disabling cri-docker service (if available) ...
	I1117 15:57:36.040795   16907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1117 15:57:36.055448   16907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1117 15:57:36.069910   16907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1117 15:57:36.190723   16907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1117 15:57:36.313177   16907 docker.go:219] disabling docker service ...
	I1117 15:57:36.313255   16907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1117 15:57:36.327108   16907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1117 15:57:36.339949   16907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1117 15:57:36.447850   16907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1117 15:57:36.553106   16907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1117 15:57:36.567017   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 15:57:36.585453   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1117 15:57:36.595613   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1117 15:57:36.605931   16907 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1117 15:57:36.606002   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1117 15:57:36.615922   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 15:57:36.625911   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1117 15:57:36.635617   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 15:57:36.645812   16907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1117 15:57:36.656282   16907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
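The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox (pause) image and forcing `SystemdCgroup = false` for the cgroupfs driver. A reduced replay against a scratch config.toml (the fragment below is an assumed minimal config, not the VM's real file):

```shell
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# same indentation-preserving substitutions as the log's sed -i -r calls
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
RESULT=$(grep -E 'sandbox_image|SystemdCgroup' "$CFG")
echo "$RESULT"
rm -f "$CFG"
```

Capturing the leading whitespace in `( *)` and re-emitting it as `\1` is what keeps the TOML nesting intact while the value changes.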
	I1117 15:57:36.666366   16907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1117 15:57:36.675035   16907 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1117 15:57:36.675103   16907 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1117 15:57:36.689718   16907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1117 15:57:36.698842   16907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 15:57:36.799679   16907 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1117 15:57:36.831471   16907 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1117 15:57:36.831566   16907 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1117 15:57:36.838275   16907 retry.go:31] will retry after 782.36847ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1117 15:57:37.620903   16907 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
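The "Will wait 60s for socket path" sequence above polls `stat` on the containerd socket and retries with a backoff until it appears (here one retry of ~782ms sufficed after the daemon restart). A minimal sketch of that wait loop, with a placeholder path and a fixed sleep where `retry.go` uses a jittered backoff:

```shell
wait_for_socket() {
    sock=$1
    deadline=$(( $(date +%s) + ${2:-60} ))
    while ! stat "$sock" >/dev/null 2>&1; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 0.5   # stand-in for the jittered retry interval
    done
}

SOCK=$(mktemp)   # placeholder for /run/containerd/containerd.sock
READY=$(wait_for_socket "$SOCK" 5 && echo "socket ready")
echo "$READY"
rm -f "$SOCK"
```

Polling with `stat` rather than trying to connect mirrors the log: existence of the socket path is the readiness signal before the crictl version probe.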
	I1117 15:57:37.626481   16907 start.go:540] Will wait 60s for crictl version
	I1117 15:57:37.626577   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:37.630516   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1117 15:57:37.666412   16907 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.9
	RuntimeApiVersion:  v1
	I1117 15:57:37.666504   16907 ssh_runner.go:195] Run: containerd --version
	I1117 15:57:37.695776   16907 ssh_runner.go:195] Run: containerd --version
	I1117 15:57:37.729481   16907 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.7.9 ...
	I1117 15:57:37.730985   16907 main.go:141] libmachine: (addons-875867) Calling .GetIP
	I1117 15:57:37.733639   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:37.733972   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:57:37.733993   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:57:37.734242   16907 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1117 15:57:37.738361   16907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1117 15:57:37.751725   16907 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1117 15:57:37.751784   16907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1117 15:57:37.786796   16907 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1117 15:57:37.786870   16907 ssh_runner.go:195] Run: which lz4
	I1117 15:57:37.790662   16907 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1117 15:57:37.794699   16907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1117 15:57:37.794738   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457263989 bytes)
	I1117 15:57:39.606176   16907 containerd.go:547] Took 1.815558 seconds to copy over tarball
	I1117 15:57:39.606251   16907 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1117 15:57:42.884570   16907 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.278291765s)
	I1117 15:57:42.884598   16907 containerd.go:554] Took 3.278398 seconds to extract the tarball
	I1117 15:57:42.884609   16907 ssh_runner.go:146] rm: /preloaded.tar.lz4
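The preload path above scp's a ~457MB `preloaded-images-k8s-...tar.lz4`, extracts it with `sudo tar -I lz4 -C /var -xf`, then deletes the tarball. A small round-trip of the same pack/extract shape, with gzip standing in for lz4 so the sketch runs without the `lz4` tool installed (the directory layout is illustrative, not the real preload contents):

```shell
SRC=$(mktemp -d); DST=$(mktemp -d); TARBALL=$(mktemp)
mkdir -p "$SRC/lib/containerd"
echo "image-blob" > "$SRC/lib/containerd/blob"
# -I names the external (de)compressor, -C sets the working directory,
# exactly as in "tar -I lz4 -C /var -xf /preloaded.tar.lz4"
tar -I gzip -C "$SRC" -cf "$TARBALL" .
tar -I gzip -C "$DST" -xf "$TARBALL"
GOT=$(cat "$DST/lib/containerd/blob")
echo "$GOT"   # -> image-blob
rm -rf "$SRC" "$DST" "$TARBALL"
```

Extracting under `-C /var` is what lets the preload land directly in containerd's image store before the first `crictl images` check.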
	I1117 15:57:42.926508   16907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 15:57:43.029638   16907 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1117 15:57:43.055753   16907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1117 15:57:43.096020   16907 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1117 15:57:43.096140   16907 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 15:57:43.096201   16907 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1117 15:57:43.096222   16907 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1117 15:57:43.096233   16907 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1117 15:57:43.096287   16907 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1117 15:57:43.096212   16907 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1117 15:57:43.096389   16907 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1117 15:57:43.096413   16907 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1117 15:57:43.097514   16907 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1117 15:57:43.097605   16907 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1117 15:57:43.097604   16907 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1117 15:57:43.097602   16907 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1117 15:57:43.097927   16907 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1117 15:57:43.097942   16907 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1117 15:57:43.097946   16907 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1117 15:57:43.097927   16907 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 15:57:43.265210   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.3"
	I1117 15:57:43.274451   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	I1117 15:57:43.277408   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0"
	I1117 15:57:43.282606   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.3"
	I1117 15:57:43.295370   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I1117 15:57:43.299521   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.3"
	I1117 15:57:43.318164   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.3"
	I1117 15:57:43.415925   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1117 15:57:44.046860   16907 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1117 15:57:44.046901   16907 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1117 15:57:44.046953   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.394120   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9": (1.11962281s)
	I1117 15:57:44.394139   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0": (1.11670217s)
	I1117 15:57:44.394176   16907 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1117 15:57:44.394206   16907 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1117 15:57:44.394211   16907 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1117 15:57:44.394239   16907 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1117 15:57:44.394271   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.394284   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.923680   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.3": (1.641037163s)
	I1117 15:57:44.923716   16907 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1117 15:57:44.923753   16907 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1117 15:57:44.923793   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.977404   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1": (1.681992824s)
	I1117 15:57:44.977455   16907 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1117 15:57:44.977481   16907 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1117 15:57:44.977494   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.3": (1.65930474s)
	I1117 15:57:44.977518   16907 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1117 15:57:44.977546   16907 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1117 15:57:44.977574   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.977528   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.977578   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5": (1.561629214s)
	I1117 15:57:44.977615   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1117 15:57:44.977631   16907 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1117 15:57:44.977456   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.3": (1.677908123s)
	I1117 15:57:44.977662   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1117 15:57:44.977671   16907 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 15:57:44.977671   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1117 15:57:44.977675   16907 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1117 15:57:44.977692   16907 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1117 15:57:44.977697   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.977722   16907 ssh_runner.go:195] Run: which crictl
	I1117 15:57:44.977747   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1117 15:57:45.386615   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1117 15:57:45.386628   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1117 15:57:45.386692   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1117 15:57:45.386756   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1117 15:57:45.389853   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1117 15:57:45.389915   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1117 15:57:45.389923   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1117 15:57:45.389931   16907 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 15:57:45.639046   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1117 15:57:45.639102   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1117 15:57:45.639142   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1117 15:57:45.641128   16907 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1117 15:57:45.641170   16907 cache_images.go:92] LoadImages completed in 2.545105159s
	W1117 15:57:45.641230   16907 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3: no such file or directory
	I1117 15:57:45.641295   16907 ssh_runner.go:195] Run: sudo crictl info
	I1117 15:57:45.681439   16907 cni.go:84] Creating CNI manager for ""
	I1117 15:57:45.681464   16907 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1117 15:57:45.681482   16907 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1117 15:57:45.681500   16907 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-875867 NodeName:addons-875867 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1117 15:57:45.681626   16907 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-875867"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1117 15:57:45.681696   16907 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-875867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-875867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1117 15:57:45.681745   16907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1117 15:57:45.691239   16907 binaries.go:44] Found k8s binaries, skipping transfer
	I1117 15:57:45.691307   16907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1117 15:57:45.700374   16907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1117 15:57:45.717262   16907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1117 15:57:45.733864   16907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1117 15:57:45.750624   16907 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I1117 15:57:45.754618   16907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1117 15:57:45.767671   16907 certs.go:56] Setting up /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867 for IP: 192.168.39.118
	I1117 15:57:45.767712   16907 certs.go:190] acquiring lock for shared ca certs: {Name:mk6dbc936cb3414ae17aa1ee2ae8618428c5dc59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:45.767873   16907 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17634-9289/.minikube/ca.key
	I1117 15:57:45.846582   16907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9289/.minikube/ca.crt ...
	I1117 15:57:45.846614   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/ca.crt: {Name:mk57186525946c03f47d0a3a41e22745aeaaf8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:45.846787   16907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9289/.minikube/ca.key ...
	I1117 15:57:45.846797   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/ca.key: {Name:mk3e9bc2edd987ec1ec51ab1cca7c0c4844b1c55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:45.846860   16907 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.key
	I1117 15:57:45.966230   16907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.crt ...
	I1117 15:57:45.966258   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.crt: {Name:mk0ffd06b01a432b6123f1dbe5cbad4990f3036e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:45.966412   16907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.key ...
	I1117 15:57:45.966422   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.key: {Name:mk8e2092597eff0e5859f504f08704ec11df1f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:45.966536   16907 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.key
	I1117 15:57:45.966550   16907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt with IP's: []
	I1117 15:57:46.161179   16907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt ...
	I1117 15:57:46.161211   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: {Name:mk9bb1dc6e6c6a72c62a366479db1c7df85a4cc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.161373   16907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.key ...
	I1117 15:57:46.161383   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.key: {Name:mk99b2c01420552e1650c2b93add601f3e57e2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.161451   16907 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key.ee260ba9
	I1117 15:57:46.161468   16907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt.ee260ba9 with IP's: [192.168.39.118 10.96.0.1 127.0.0.1 10.0.0.1]
	I1117 15:57:46.365359   16907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt.ee260ba9 ...
	I1117 15:57:46.365390   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt.ee260ba9: {Name:mkea9df2273d98a00fa4f825d2f94ae1edf29513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.365548   16907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key.ee260ba9 ...
	I1117 15:57:46.365563   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key.ee260ba9: {Name:mk9e3d9bb5af3ecebca320aa20a18ff919b856d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.365635   16907 certs.go:337] copying /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt.ee260ba9 -> /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt
	I1117 15:57:46.365698   16907 certs.go:341] copying /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key.ee260ba9 -> /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key
	I1117 15:57:46.365750   16907 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.key
	I1117 15:57:46.365766   16907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.crt with IP's: []
	I1117 15:57:46.490777   16907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.crt ...
	I1117 15:57:46.490806   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.crt: {Name:mke7e6f5a2bd2925ef77c5f99f044a93350da263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.490953   16907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.key ...
	I1117 15:57:46.490963   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.key: {Name:mk4af81e4e9c78e1bec2bbd630f686e56f4935b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:46.491121   16907 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca-key.pem (1675 bytes)
	I1117 15:57:46.491154   16907 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/home/jenkins/minikube-integration/17634-9289/.minikube/certs/ca.pem (1082 bytes)
	I1117 15:57:46.491178   16907 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/home/jenkins/minikube-integration/17634-9289/.minikube/certs/cert.pem (1123 bytes)
	I1117 15:57:46.491202   16907 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9289/.minikube/certs/home/jenkins/minikube-integration/17634-9289/.minikube/certs/key.pem (1679 bytes)
	I1117 15:57:46.491796   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1117 15:57:46.517076   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1117 15:57:46.541880   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1117 15:57:46.566032   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1117 15:57:46.590468   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1117 15:57:46.614762   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1117 15:57:46.639119   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1117 15:57:46.662605   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1117 15:57:46.686830   16907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1117 15:57:46.710261   16907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1117 15:57:46.726732   16907 ssh_runner.go:195] Run: openssl version
	I1117 15:57:46.732463   16907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1117 15:57:46.742789   16907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1117 15:57:46.747610   16907 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 17 15:57 /usr/share/ca-certificates/minikubeCA.pem
	I1117 15:57:46.747675   16907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1117 15:57:46.753219   16907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1117 15:57:46.763516   16907 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1117 15:57:46.768160   16907 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1117 15:57:46.768217   16907 kubeadm.go:404] StartCluster: {Name:addons-875867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-875867 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:57:46.768320   16907 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1117 15:57:46.768370   16907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1117 15:57:46.811991   16907 cri.go:89] found id: ""
	I1117 15:57:46.812078   16907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1117 15:57:46.821404   16907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1117 15:57:46.830773   16907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1117 15:57:46.839940   16907 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1117 15:57:46.839995   16907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1117 15:57:47.033165   16907 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1117 15:58:13.969349   16907 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1117 15:58:13.969425   16907 kubeadm.go:322] [preflight] Running pre-flight checks
	I1117 15:58:13.969516   16907 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1117 15:58:13.969602   16907 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1117 15:58:13.969681   16907 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1117 15:58:13.969734   16907 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1117 15:58:13.971430   16907 out.go:204]   - Generating certificates and keys ...
	I1117 15:58:13.971522   16907 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1117 15:58:13.971634   16907 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1117 15:58:13.971731   16907 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1117 15:58:13.971807   16907 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1117 15:58:13.971898   16907 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1117 15:58:13.971963   16907 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1117 15:58:13.972037   16907 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1117 15:58:13.972188   16907 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-875867 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1117 15:58:13.972236   16907 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1117 15:58:13.972405   16907 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-875867 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1117 15:58:13.972505   16907 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1117 15:58:13.972559   16907 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1117 15:58:13.972603   16907 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1117 15:58:13.972648   16907 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1117 15:58:13.972718   16907 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1117 15:58:13.972782   16907 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1117 15:58:13.972859   16907 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1117 15:58:13.972929   16907 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1117 15:58:13.973055   16907 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1117 15:58:13.973154   16907 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1117 15:58:13.974687   16907 out.go:204]   - Booting up control plane ...
	I1117 15:58:13.974803   16907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1117 15:58:13.974892   16907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1117 15:58:13.974977   16907 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1117 15:58:13.975087   16907 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1117 15:58:13.975160   16907 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1117 15:58:13.975214   16907 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1117 15:58:13.975376   16907 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1117 15:58:13.975475   16907 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006753 seconds
	I1117 15:58:13.975619   16907 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1117 15:58:13.975776   16907 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1117 15:58:13.975840   16907 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1117 15:58:13.976049   16907 kubeadm.go:322] [mark-control-plane] Marking the node addons-875867 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1117 15:58:13.976126   16907 kubeadm.go:322] [bootstrap-token] Using token: 9o8gut.to0ycz9owtyagwb1
	I1117 15:58:13.977981   16907 out.go:204]   - Configuring RBAC rules ...
	I1117 15:58:13.978090   16907 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1117 15:58:13.978197   16907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1117 15:58:13.978366   16907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1117 15:58:13.978505   16907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1117 15:58:13.978671   16907 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1117 15:58:13.978778   16907 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1117 15:58:13.978934   16907 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1117 15:58:13.979013   16907 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1117 15:58:13.979060   16907 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1117 15:58:13.979066   16907 kubeadm.go:322] 
	I1117 15:58:13.979113   16907 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1117 15:58:13.979120   16907 kubeadm.go:322] 
	I1117 15:58:13.979186   16907 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1117 15:58:13.979192   16907 kubeadm.go:322] 
	I1117 15:58:13.979212   16907 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1117 15:58:13.979282   16907 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1117 15:58:13.979355   16907 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1117 15:58:13.979371   16907 kubeadm.go:322] 
	I1117 15:58:13.979445   16907 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1117 15:58:13.979454   16907 kubeadm.go:322] 
	I1117 15:58:13.979542   16907 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1117 15:58:13.979564   16907 kubeadm.go:322] 
	I1117 15:58:13.979610   16907 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1117 15:58:13.979671   16907 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1117 15:58:13.979724   16907 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1117 15:58:13.979730   16907 kubeadm.go:322] 
	I1117 15:58:13.979815   16907 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1117 15:58:13.979922   16907 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1117 15:58:13.979938   16907 kubeadm.go:322] 
	I1117 15:58:13.980042   16907 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9o8gut.to0ycz9owtyagwb1 \
	I1117 15:58:13.980163   16907 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1dcb5f490cd26bb92cf66b4015490e4b75af2e25c0ba2ddd8dbbb0240c723000 \
	I1117 15:58:13.980186   16907 kubeadm.go:322] 	--control-plane 
	I1117 15:58:13.980192   16907 kubeadm.go:322] 
	I1117 15:58:13.980282   16907 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1117 15:58:13.980295   16907 kubeadm.go:322] 
	I1117 15:58:13.980376   16907 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9o8gut.to0ycz9owtyagwb1 \
	I1117 15:58:13.980475   16907 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1dcb5f490cd26bb92cf66b4015490e4b75af2e25c0ba2ddd8dbbb0240c723000 
	I1117 15:58:13.980485   16907 cni.go:84] Creating CNI manager for ""
	I1117 15:58:13.980491   16907 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1117 15:58:13.982222   16907 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1117 15:58:13.983677   16907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1117 15:58:14.001475   16907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1117 15:58:14.064625   16907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1117 15:58:14.064692   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49db7ae766960f8f9e07cffcbe974581755c3ae6 minikube.k8s.io/name=addons-875867 minikube.k8s.io/updated_at=2023_11_17T15_58_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:14.064695   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:14.308346   16907 ops.go:34] apiserver oom_adj: -16
	I1117 15:58:14.308391   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:14.438623   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:15.032560   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:15.532004   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:16.032059   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:16.532990   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:17.032016   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:17.532916   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:18.032965   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:18.532470   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:19.032273   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:19.532627   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:20.032151   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:20.532066   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:21.032109   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:21.532693   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:22.032017   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:22.532950   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:23.032445   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:23.532733   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:24.032916   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:24.532541   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:25.032313   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:25.532781   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:26.032611   16907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 15:58:26.174857   16907 kubeadm.go:1081] duration metric: took 12.110225635s to wait for elevateKubeSystemPrivileges.
	I1117 15:58:26.174889   16907 kubeadm.go:406] StartCluster complete in 39.406677731s
	I1117 15:58:26.174905   16907 settings.go:142] acquiring lock: {Name:mk3e26d7ecffac43b0ca3c7cf22145abce70435b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:58:26.175037   16907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 15:58:26.175445   16907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/kubeconfig: {Name:mk4b2e72c7b23c94451a67757fba5120936b29f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:58:26.175669   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1117 15:58:26.175749   16907 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1117 15:58:26.175870   16907 addons.go:69] Setting default-storageclass=true in profile "addons-875867"
	I1117 15:58:26.175885   16907 addons.go:69] Setting metrics-server=true in profile "addons-875867"
	I1117 15:58:26.175893   16907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-875867"
	I1117 15:58:26.175893   16907 addons.go:69] Setting inspektor-gadget=true in profile "addons-875867"
	I1117 15:58:26.175900   16907 config.go:182] Loaded profile config "addons-875867": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 15:58:26.175913   16907 addons.go:69] Setting helm-tiller=true in profile "addons-875867"
	I1117 15:58:26.175926   16907 addons.go:69] Setting ingress=true in profile "addons-875867"
	I1117 15:58:26.175932   16907 addons.go:69] Setting gcp-auth=true in profile "addons-875867"
	I1117 15:58:26.175945   16907 addons.go:231] Setting addon ingress=true in "addons-875867"
	I1117 15:58:26.175948   16907 addons.go:69] Setting ingress-dns=true in profile "addons-875867"
	I1117 15:58:26.175953   16907 addons.go:69] Setting registry=true in profile "addons-875867"
	I1117 15:58:26.175959   16907 addons.go:231] Setting addon ingress-dns=true in "addons-875867"
	I1117 15:58:26.175966   16907 mustload.go:65] Loading cluster: addons-875867
	I1117 15:58:26.175975   16907 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-875867"
	I1117 15:58:26.176005   16907 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-875867"
	I1117 15:58:26.176010   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176010   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.175997   16907 addons.go:69] Setting cloud-spanner=true in profile "addons-875867"
	I1117 15:58:26.176033   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176048   16907 addons.go:231] Setting addon cloud-spanner=true in "addons-875867"
	I1117 15:58:26.176094   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176175   16907 config.go:182] Loaded profile config "addons-875867": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 15:58:26.175871   16907 addons.go:69] Setting volumesnapshots=true in profile "addons-875867"
	I1117 15:58:26.176390   16907 addons.go:231] Setting addon volumesnapshots=true in "addons-875867"
	I1117 15:58:26.176394   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176416   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176419   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176426   16907 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-875867"
	I1117 15:58:26.176428   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176421   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176451   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176457   16907 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-875867"
	I1117 15:58:26.176476   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176481   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.175943   16907 addons.go:231] Setting addon helm-tiller=true in "addons-875867"
	I1117 15:58:26.176497   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176513   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176520   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.175967   16907 addons.go:231] Setting addon registry=true in "addons-875867"
	I1117 15:58:26.176530   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.175918   16907 addons.go:231] Setting addon inspektor-gadget=true in "addons-875867"
	I1117 15:58:26.176574   16907 addons.go:69] Setting storage-provisioner=true in profile "addons-875867"
	I1117 15:58:26.176585   16907 addons.go:231] Setting addon storage-provisioner=true in "addons-875867"
	I1117 15:58:26.176601   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176619   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.176750   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176769   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176822   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176834   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176837   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176856   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176886   16907 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-875867"
	I1117 15:58:26.176898   16907 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-875867"
	I1117 15:58:26.176902   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.176921   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.176991   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.177046   16907 addons.go:231] Setting addon metrics-server=true in "addons-875867"
	I1117 15:58:26.177068   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.177085   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.177089   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.177258   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.177938   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.177962   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.198814   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1117 15:58:26.198839   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I1117 15:58:26.198820   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I1117 15:58:26.198947   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I1117 15:58:26.199053   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.199080   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.199343   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I1117 15:58:26.199636   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.199648   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.200163   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.200233   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.200387   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.200606   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.201024   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.201111   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.201136   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.201152   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.201234   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.201279   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.201307   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.201520   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.202095   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.202126   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.202305   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.202469   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.202488   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.202623   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.202668   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.202809   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.202828   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.203487   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.203547   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.203593   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.203634   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.204020   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.204049   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.204511   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.204538   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.205019   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.205051   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.206769   16907 addons.go:231] Setting addon default-storageclass=true in "addons-875867"
	I1117 15:58:26.206809   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.207195   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.207216   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.222329   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I1117 15:58:26.223054   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.223619   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.223639   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.224081   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.224612   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.224648   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.233090   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I1117 15:58:26.233831   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.234594   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.234614   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.235338   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.235580   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.237023   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I1117 15:58:26.237445   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.238360   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.240960   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1117 15:58:26.238958   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.241803   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37329
	I1117 15:58:26.242789   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.242817   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1117 15:58:26.242839   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1117 15:58:26.241834   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I1117 15:58:26.242861   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.243231   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.243340   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.243386   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.243913   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.243948   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.244306   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.245913   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.246537   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.246612   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.249022   16907 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1117 15:58:26.247642   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.247676   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.247712   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I1117 15:58:26.247990   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.249956   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.250801   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.250802   16907 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1117 15:58:26.250822   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1117 15:58:26.250837   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.251484   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I1117 15:58:26.252922   16907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1117 15:58:26.251599   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.251625   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.251911   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.252217   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.253005   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.253212   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.253509   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.254441   16907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1117 15:58:26.254468   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.253697   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.254173   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.254270   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.253663   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.254659   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.254978   16907 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-875867" context rescaled to 1 replicas
	I1117 15:58:26.255358   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I1117 15:58:26.256065   16907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1117 15:58:26.256223   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.258100   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.256239   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.256611   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.256876   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.256919   16907 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1117 15:58:26.259551   16907 out.go:177] * Verifying Kubernetes components...
	I1117 15:58:26.258253   16907 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1117 15:58:26.257057   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.256971   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.258341   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.258691   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.258721   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.260908   16907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 15:58:26.260986   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1117 15:58:26.261005   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.261072   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.262702   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I1117 15:58:26.262761   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I1117 15:58:26.262794   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.262806   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.262819   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.263442   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.263475   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.263914   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.263951   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.264014   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.264037   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.264212   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.264404   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.264422   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.264779   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.265302   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.265333   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.265572   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.265602   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.266820   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.267327   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.267349   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.267701   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.267887   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.268029   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.268190   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1117 15:58:26.268323   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.268845   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.269329   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.269346   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.269685   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.270174   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.270191   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.270608   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.270804   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.271594   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.271822   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.273636   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.276047   16907 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1117 15:58:26.277562   16907 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1117 15:58:26.277580   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1117 15:58:26.277599   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.275959   16907 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-875867"
	I1117 15:58:26.277693   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:26.278131   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.278167   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.276009   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35661
	I1117 15:58:26.282005   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.282078   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I1117 15:58:26.282481   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.282589   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.282798   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.282991   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.283134   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.283267   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.283933   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.284506   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.284525   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.284894   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.285411   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.285443   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.286934   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.287483   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.287501   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.287850   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.288379   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.288411   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.288616   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38285
	I1117 15:58:26.288625   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I1117 15:58:26.289795   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.290354   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.290371   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.290771   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.290995   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.291981   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.292656   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.292682   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.292745   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.295133   16907 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1117 15:58:26.293244   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.296646   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I1117 15:58:26.296860   16907 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1117 15:58:26.296874   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1117 15:58:26.296892   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.297255   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.297332   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I1117 15:58:26.297715   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.297798   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.298064   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I1117 15:58:26.298252   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.298261   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.298539   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.298557   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.298789   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.298959   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.299177   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.299854   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.299880   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.299941   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.300140   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.300699   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.301176   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.301242   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.303513   16907 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1117 15:58:26.301754   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.301784   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.302022   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.302230   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.302620   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.305311   16907 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1117 15:58:26.305328   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1117 15:58:26.305352   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.305368   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.305415   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.311398   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1117 15:58:26.306059   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.309251   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I1117 15:58:26.309546   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.310456   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.311059   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I1117 15:58:26.312246   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I1117 15:58:26.314931   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1117 15:58:26.313052   16907 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1117 15:58:26.313113   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.313336   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.313429   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.313623   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.313628   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.313674   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.316528   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I1117 15:58:26.319173   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1117 15:58:26.317187   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.317348   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.317392   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.317546   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.317671   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.317733   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.317882   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.319977   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I1117 15:58:26.322442   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1117 15:58:26.321143   16907 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1117 15:58:26.321170   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.321201   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.321238   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.321393   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.321432   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.321980   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.323748   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.325062   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1117 15:58:26.323750   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1117 15:58:26.324169   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.324265   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.324596   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.324664   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.324892   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.327623   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1117 15:58:26.326337   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.326376   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.326509   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.326530   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.326653   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.326908   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:26.328930   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:26.330185   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1117 15:58:26.329559   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.331757   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.330406   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.332073   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.334513   16907 out.go:177]   - Using image docker.io/registry:2.8.3
	I1117 15:58:26.333451   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.336008   16907 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1117 15:58:26.333772   16907 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1117 15:58:26.334109   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.333500   16907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1117 15:58:26.334555   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.334833   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I1117 15:58:26.335536   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.337381   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1117 15:58:26.337389   16907 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1117 15:58:26.337398   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.337399   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1117 15:58:26.337414   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.339073   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1117 15:58:26.337456   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.338175   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.339103   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1117 15:58:26.339124   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.340672   16907 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1117 15:58:26.339319   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.341172   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.342105   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.342130   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.341901   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.342169   16907 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1117 15:58:26.341937   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.342186   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1117 15:58:26.342203   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.342342   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.342418   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.342444   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.342465   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.342570   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.342747   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.342771   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.342959   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.343222   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.343420   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.343703   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.343727   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.343731   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.343856   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.344118   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.344257   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.344436   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.345757   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.346305   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.346326   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.346546   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.346739   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.346832   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.346905   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.351981   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.352588   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.352609   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.352963   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.353143   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.355305   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.357381   16907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 15:58:26.359033   16907 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 15:58:26.359047   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1117 15:58:26.359065   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.362610   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.363121   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.363151   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.363324   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.363561   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.363721   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.363884   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.371836   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46017
	I1117 15:58:26.372245   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:26.372741   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:26.372764   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:26.373099   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:26.373282   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:26.374958   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:26.376715   16907 out.go:177]   - Using image docker.io/busybox:stable
	I1117 15:58:26.378247   16907 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1117 15:58:26.379745   16907 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1117 15:58:26.379765   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1117 15:58:26.379794   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:26.382856   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.383249   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:26.383291   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:26.383466   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:26.383669   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:26.383811   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:26.383939   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:26.517013   16907 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1117 15:58:26.517048   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1117 15:58:26.588200   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1117 15:58:26.644971   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1117 15:58:26.652633   16907 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1117 15:58:26.652662   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1117 15:58:26.673818   16907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1117 15:58:26.673840   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1117 15:58:26.836462   16907 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1117 15:58:26.836485   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1117 15:58:26.858479   16907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1117 15:58:26.859410   16907 node_ready.go:35] waiting up to 6m0s for node "addons-875867" to be "Ready" ...
	I1117 15:58:26.863126   16907 node_ready.go:49] node "addons-875867" has status "Ready":"True"
	I1117 15:58:26.863151   16907 node_ready.go:38] duration metric: took 3.703169ms waiting for node "addons-875867" to be "Ready" ...
	I1117 15:58:26.863170   16907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 15:58:26.869635   16907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:26.906248   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1117 15:58:26.914354   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1117 15:58:26.975789   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1117 15:58:27.054921   16907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1117 15:58:27.054978   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1117 15:58:27.055052   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 15:58:27.127632   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1117 15:58:27.203588   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1117 15:58:27.203619   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1117 15:58:27.270167   16907 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1117 15:58:27.270203   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1117 15:58:27.331090   16907 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1117 15:58:27.331119   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1117 15:58:27.365435   16907 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1117 15:58:27.365464   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1117 15:58:27.373058   16907 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1117 15:58:27.373086   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1117 15:58:27.403511   16907 pod_ready.go:92] pod "etcd-addons-875867" in "kube-system" namespace has status "Ready":"True"
	I1117 15:58:27.403554   16907 pod_ready.go:81] duration metric: took 533.892868ms waiting for pod "etcd-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.403567   16907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.409392   16907 pod_ready.go:92] pod "kube-apiserver-addons-875867" in "kube-system" namespace has status "Ready":"True"
	I1117 15:58:27.409421   16907 pod_ready.go:81] duration metric: took 5.845785ms waiting for pod "kube-apiserver-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.409442   16907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.415280   16907 pod_ready.go:92] pod "kube-controller-manager-addons-875867" in "kube-system" namespace has status "Ready":"True"
	I1117 15:58:27.415307   16907 pod_ready.go:81] duration metric: took 5.852662ms waiting for pod "kube-controller-manager-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.415320   16907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fdr6g" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:27.518062   16907 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1117 15:58:27.518089   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1117 15:58:27.583245   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1117 15:58:27.583268   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1117 15:58:27.596864   16907 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1117 15:58:27.596891   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1117 15:58:27.749304   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1117 15:58:27.772087   16907 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1117 15:58:27.772111   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1117 15:58:27.852429   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1117 15:58:27.925492   16907 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1117 15:58:27.925514   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1117 15:58:27.931417   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1117 15:58:27.931447   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1117 15:58:28.219140   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1117 15:58:28.219171   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1117 15:58:28.251004   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1117 15:58:28.251029   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1117 15:58:28.269680   16907 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1117 15:58:28.269706   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1117 15:58:28.461642   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1117 15:58:28.468779   16907 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1117 15:58:28.468800   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1117 15:58:28.573691   16907 pod_ready.go:92] pod "kube-proxy-fdr6g" in "kube-system" namespace has status "Ready":"True"
	I1117 15:58:28.573716   16907 pod_ready.go:81] duration metric: took 1.158389134s waiting for pod "kube-proxy-fdr6g" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:28.573726   16907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:28.605991   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1117 15:58:28.694019   16907 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1117 15:58:28.694045   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1117 15:58:28.742177   16907 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1117 15:58:28.742209   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1117 15:58:28.864368   16907 pod_ready.go:92] pod "kube-scheduler-addons-875867" in "kube-system" namespace has status "Ready":"True"
	I1117 15:58:28.864391   16907 pod_ready.go:81] duration metric: took 290.656825ms waiting for pod "kube-scheduler-addons-875867" in "kube-system" namespace to be "Ready" ...
	I1117 15:58:28.864401   16907 pod_ready.go:38] duration metric: took 2.001203937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1117 15:58:28.864419   16907 api_server.go:52] waiting for apiserver process to appear ...
	I1117 15:58:28.864469   16907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 15:58:29.282522   16907 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1117 15:58:29.282551   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1117 15:58:29.324681   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1117 15:58:29.324705   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1117 15:58:29.950795   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1117 15:58:29.950818   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1117 15:58:30.010269   16907 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1117 15:58:30.010290   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1117 15:58:30.182834   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1117 15:58:30.182865   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1117 15:58:30.198965   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1117 15:58:30.377476   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1117 15:58:30.377506   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1117 15:58:30.547747   16907 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1117 15:58:30.547768   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1117 15:58:30.851225   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1117 15:58:32.261159   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.672913875s)
	I1117 15:58:32.261217   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:32.261233   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:32.261532   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:32.261555   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:32.261567   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:32.261578   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:32.261591   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:32.261988   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:32.262007   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:32.261991   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:32.958836   16907 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1117 15:58:32.958871   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:32.962818   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:32.963275   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:32.963308   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:32.963569   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:32.963799   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:32.963979   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:32.964106   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:33.458120   16907 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1117 15:58:34.601612   16907 addons.go:231] Setting addon gcp-auth=true in "addons-875867"
	I1117 15:58:34.601664   16907 host.go:66] Checking if "addons-875867" exists ...
	I1117 15:58:34.601976   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:34.602014   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:34.617431   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I1117 15:58:34.617850   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:34.618282   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:34.618306   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:34.618706   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:34.619171   16907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 15:58:34.619210   16907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 15:58:34.635097   16907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1117 15:58:34.635584   16907 main.go:141] libmachine: () Calling .GetVersion
	I1117 15:58:34.636067   16907 main.go:141] libmachine: Using API Version  1
	I1117 15:58:34.636086   16907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 15:58:34.636408   16907 main.go:141] libmachine: () Calling .GetMachineName
	I1117 15:58:34.636618   16907 main.go:141] libmachine: (addons-875867) Calling .GetState
	I1117 15:58:34.638444   16907 main.go:141] libmachine: (addons-875867) Calling .DriverName
	I1117 15:58:34.638708   16907 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1117 15:58:34.638732   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHHostname
	I1117 15:58:34.641657   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:34.642044   16907 main.go:141] libmachine: (addons-875867) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:45:2c", ip: ""} in network mk-addons-875867: {Iface:virbr1 ExpiryTime:2023-11-17 16:57:27 +0000 UTC Type:0 Mac:52:54:00:a8:45:2c Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-875867 Clientid:01:52:54:00:a8:45:2c}
	I1117 15:58:34.642079   16907 main.go:141] libmachine: (addons-875867) DBG | domain addons-875867 has defined IP address 192.168.39.118 and MAC address 52:54:00:a8:45:2c in network mk-addons-875867
	I1117 15:58:34.642200   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHPort
	I1117 15:58:34.642412   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHKeyPath
	I1117 15:58:34.642585   16907 main.go:141] libmachine: (addons-875867) Calling .GetSSHUsername
	I1117 15:58:34.642745   16907 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/addons-875867/id_rsa Username:docker}
	I1117 15:58:37.497866   16907 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.639350479s)
	I1117 15:58:37.497906   16907 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1117 15:58:37.497929   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.591645688s)
	I1117 15:58:37.497978   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.583594316s)
	I1117 15:58:37.497979   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498057   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498063   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.442988003s)
	I1117 15:58:37.498080   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498096   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498095   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.37043712s)
	I1117 15:58:37.498116   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498010   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498131   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498138   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498015   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.52219111s)
	I1117 15:58:37.498193   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498208   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498254   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.7489195s)
	I1117 15:58:37.498276   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498286   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498300   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.645832537s)
	I1117 15:58:37.498320   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498331   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498365   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.036696019s)
	I1117 15:58:37.498382   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498392   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498614   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.498468   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.892439373s)
	I1117 15:58:37.498666   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498676   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498677   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498686   16907 main.go:141] libmachine: Making call to close driver server
	W1117 15:58:37.498687   16907 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1117 15:58:37.498708   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.498708   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.498501   16907 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.634017475s)
	I1117 15:58:37.498723   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498732   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498734   16907 api_server.go:72] duration metric: took 11.240511003s to wait for apiserver process to appear ...
	I1117 15:58:37.498741   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498742   16907 api_server.go:88] waiting for apiserver healthz status ...
	I1117 15:58:37.498749   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498756   16907 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I1117 15:58:37.498691   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498795   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498804   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498804   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498814   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498825   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498533   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.498551   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.498833   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498569   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.299574848s)
	I1117 15:58:37.498880   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498890   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.498585   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498925   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498934   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.498942   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.499051   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.499079   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.499087   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.498637   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.499197   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.499226   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.499234   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.499242   16907 addons.go:467] Verifying addon metrics-server=true in "addons-875867"
	I1117 15:58:37.500405   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.500416   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.500425   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.500434   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.500439   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.500471   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.500480   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.500490   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.500492   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.500499   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.500517   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.500524   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.500533   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.500541   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.500785   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.500810   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.500817   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.501090   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.501102   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.501868   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.501900   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.501927   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.501936   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.502428   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.502454   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.502463   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.502470   16907 addons.go:467] Verifying addon registry=true in "addons-875867"
	I1117 15:58:37.504810   16907 out.go:177] * Verifying registry addon...
	I1117 15:58:37.502726   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.498713   16907 retry.go:31] will retry after 319.320582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
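	The failure above is a CRD-establishment race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same `kubectl apply` invocation as the CRDs that define its kind, so the apiserver rejects it before the CRD is established, and minikube's retry helper (the `retry.go:31] will retry after 319.320582ms` line) re-runs the command with backoff. A minimal sketch of that retry-with-backoff pattern, with hypothetical names — not minikube's actual `retry.go` implementation:

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
	// sleeping an exponentially growing delay between tries, similar to the
	// "will retry after ..." behavior visible in the log above.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 1; i <= attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %s\n", i, err, delay)
			if i < attempts {
				time.Sleep(delay)
				delay *= 2
			}
		}
		return err
	}

	func main() {
		calls := 0
		// Simulated "kubectl apply": fails until the CRD is established
		// (here, on the 3rd call), mirroring the error in the log.
		err := retryWithBackoff(5, 1*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New(`no matches for kind "VolumeSnapshotClass"`)
			}
			return nil
		})
		fmt.Println("calls:", calls, "err:", err)
	}
	```

	The later `kubectl apply --force ...` retry in this log succeeds once the snapshot CRDs are established; an alternative to retrying would be `kubectl wait --for condition=established` on the CRDs before applying the VolumeSnapshotClass.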
	I1117 15:58:37.498694   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.502882   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.857872482s)
	I1117 15:58:37.502904   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.506564   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.506579   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.506595   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.506598   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.506617   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.506653   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.507447   16907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1117 15:58:37.508343   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.508347   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.508352   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.508361   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.508361   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.508366   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.508372   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.508379   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.508352   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.508381   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.508391   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.508619   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.508624   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.508633   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.508643   16907 addons.go:467] Verifying addon ingress=true in "addons-875867"
	I1117 15:58:37.511006   16907 out.go:177] * Verifying ingress addon...
	I1117 15:58:37.512768   16907 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1117 15:58:37.527538   16907 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I1117 15:58:37.530274   16907 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1117 15:58:37.530294   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:37.530273   16907 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1117 15:58:37.530401   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:37.530718   16907 api_server.go:141] control plane version: v1.28.3
	I1117 15:58:37.530739   16907 api_server.go:131] duration metric: took 31.989675ms to wait for apiserver health ...
	I1117 15:58:37.530748   16907 system_pods.go:43] waiting for kube-system pods to appear ...
	I1117 15:58:37.534810   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.534834   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.535119   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.535176   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:37.538451   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:37.538474   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:37.538983   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:37.539065   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:37.539091   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:37.539109   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	W1117 15:58:37.539189   16907 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1117 15:58:37.539591   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:37.542136   16907 system_pods.go:59] 15 kube-system pods found
	I1117 15:58:37.542158   16907 system_pods.go:61] "coredns-5dd5756b68-zrzps" [777cc657-0f39-4a46-bbc9-841dfe2d87c4] Running
	I1117 15:58:37.542163   16907 system_pods.go:61] "etcd-addons-875867" [8318f12c-5e57-4121-99bb-1bb6492f0a78] Running
	I1117 15:58:37.542168   16907 system_pods.go:61] "kube-apiserver-addons-875867" [8e39fa8b-13c2-4b60-834e-36b79fe33e09] Running
	I1117 15:58:37.542172   16907 system_pods.go:61] "kube-controller-manager-addons-875867" [3f02d3f4-ae12-4fc3-a4ee-304d5d102463] Running
	I1117 15:58:37.542179   16907 system_pods.go:61] "kube-ingress-dns-minikube" [39c17749-5e37-49af-9cf8-ebd41b673139] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1117 15:58:37.542187   16907 system_pods.go:61] "kube-proxy-fdr6g" [000cb3d0-a970-428a-bf1c-e01d6f0fa942] Running
	I1117 15:58:37.542192   16907 system_pods.go:61] "kube-scheduler-addons-875867" [096e24d9-fc0e-4315-ab30-aed0870a4050] Running
	I1117 15:58:37.542198   16907 system_pods.go:61] "metrics-server-7c66d45ddc-hfr5l" [d2b5b52e-fa8a-4480-82bb-d56913b91b9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1117 15:58:37.542210   16907 system_pods.go:61] "nvidia-device-plugin-daemonset-5djb5" [050355c6-9874-4f44-8207-1f8439bcd3de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1117 15:58:37.542219   16907 system_pods.go:61] "registry-k9t8h" [44b91e39-4b1c-4108-bc16-48f82c7e024b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1117 15:58:37.542228   16907 system_pods.go:61] "registry-proxy-n4tj4" [35711d0c-dfcc-486e-8ae7-4a798f559329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1117 15:58:37.542236   16907 system_pods.go:61] "snapshot-controller-58dbcc7b99-9xw2m" [dbf2baed-000a-414a-a6d1-108e6cc24a77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1117 15:58:37.542247   16907 system_pods.go:61] "snapshot-controller-58dbcc7b99-fqwnk" [885e88c2-fdbc-4eea-98b8-dbaa52c23b6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1117 15:58:37.542256   16907 system_pods.go:61] "storage-provisioner" [630b55c1-6fb3-4bdc-bb91-5cf301477b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1117 15:58:37.542263   16907 system_pods.go:61] "tiller-deploy-7b677967b9-dptq7" [7022573f-54a3-478b-9a95-ce1dc16d2b50] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1117 15:58:37.542270   16907 system_pods.go:74] duration metric: took 11.518187ms to wait for pod list to return data ...
	I1117 15:58:37.542277   16907 default_sa.go:34] waiting for default service account to be created ...
	I1117 15:58:37.545539   16907 default_sa.go:45] found service account: "default"
	I1117 15:58:37.545559   16907 default_sa.go:55] duration metric: took 3.276148ms for default service account to be created ...
	I1117 15:58:37.545565   16907 system_pods.go:116] waiting for k8s-apps to be running ...
	I1117 15:58:37.553416   16907 system_pods.go:86] 15 kube-system pods found
	I1117 15:58:37.553446   16907 system_pods.go:89] "coredns-5dd5756b68-zrzps" [777cc657-0f39-4a46-bbc9-841dfe2d87c4] Running
	I1117 15:58:37.553454   16907 system_pods.go:89] "etcd-addons-875867" [8318f12c-5e57-4121-99bb-1bb6492f0a78] Running
	I1117 15:58:37.553461   16907 system_pods.go:89] "kube-apiserver-addons-875867" [8e39fa8b-13c2-4b60-834e-36b79fe33e09] Running
	I1117 15:58:37.553467   16907 system_pods.go:89] "kube-controller-manager-addons-875867" [3f02d3f4-ae12-4fc3-a4ee-304d5d102463] Running
	I1117 15:58:37.553477   16907 system_pods.go:89] "kube-ingress-dns-minikube" [39c17749-5e37-49af-9cf8-ebd41b673139] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1117 15:58:37.553484   16907 system_pods.go:89] "kube-proxy-fdr6g" [000cb3d0-a970-428a-bf1c-e01d6f0fa942] Running
	I1117 15:58:37.553494   16907 system_pods.go:89] "kube-scheduler-addons-875867" [096e24d9-fc0e-4315-ab30-aed0870a4050] Running
	I1117 15:58:37.553505   16907 system_pods.go:89] "metrics-server-7c66d45ddc-hfr5l" [d2b5b52e-fa8a-4480-82bb-d56913b91b9e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1117 15:58:37.553519   16907 system_pods.go:89] "nvidia-device-plugin-daemonset-5djb5" [050355c6-9874-4f44-8207-1f8439bcd3de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1117 15:58:37.553531   16907 system_pods.go:89] "registry-k9t8h" [44b91e39-4b1c-4108-bc16-48f82c7e024b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1117 15:58:37.553549   16907 system_pods.go:89] "registry-proxy-n4tj4" [35711d0c-dfcc-486e-8ae7-4a798f559329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1117 15:58:37.553560   16907 system_pods.go:89] "snapshot-controller-58dbcc7b99-9xw2m" [dbf2baed-000a-414a-a6d1-108e6cc24a77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1117 15:58:37.553576   16907 system_pods.go:89] "snapshot-controller-58dbcc7b99-fqwnk" [885e88c2-fdbc-4eea-98b8-dbaa52c23b6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1117 15:58:37.553589   16907 system_pods.go:89] "storage-provisioner" [630b55c1-6fb3-4bdc-bb91-5cf301477b24] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1117 15:58:37.553601   16907 system_pods.go:89] "tiller-deploy-7b677967b9-dptq7" [7022573f-54a3-478b-9a95-ce1dc16d2b50] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1117 15:58:37.553617   16907 system_pods.go:126] duration metric: took 8.04261ms to wait for k8s-apps to be running ...
	I1117 15:58:37.553630   16907 system_svc.go:44] waiting for kubelet service to be running ....
	I1117 15:58:37.553679   16907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 15:58:37.826527   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1117 15:58:38.047004   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:38.047233   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:38.549405   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:38.570975   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:39.050074   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:39.051390   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:39.546267   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:39.547908   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:39.970248   16907 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.331515589s)
	I1117 15:58:39.970278   16907 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.416581109s)
	I1117 15:58:39.970294   16907 system_svc.go:56] duration metric: took 2.416662517s WaitForService to wait for kubelet.
	I1117 15:58:39.970302   16907 kubeadm.go:581] duration metric: took 13.712080783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1117 15:58:39.970325   16907 node_conditions.go:102] verifying NodePressure condition ...
	I1117 15:58:39.972280   16907 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1117 15:58:39.970249   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.118973545s)
	I1117 15:58:39.973777   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:39.973793   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:39.975447   16907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1117 15:58:39.974160   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:39.974189   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:39.977239   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:39.977270   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:39.977285   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:39.977244   16907 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1117 15:58:39.977370   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1117 15:58:39.977695   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:39.977756   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:39.977776   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:39.977791   16907 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-875867"
	I1117 15:58:39.979496   16907 out.go:177] * Verifying csi-hostpath-driver addon...
	I1117 15:58:39.981141   16907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1117 15:58:39.998872   16907 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1117 15:58:39.998901   16907 node_conditions.go:123] node cpu capacity is 2
	I1117 15:58:39.998912   16907 node_conditions.go:105] duration metric: took 28.582407ms to run NodePressure ...
	I1117 15:58:39.998923   16907 start.go:228] waiting for startup goroutines ...
	I1117 15:58:40.007193   16907 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1117 15:58:40.007216   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:40.026133   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:40.056496   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:40.059199   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:40.080770   16907 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1117 15:58:40.080795   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1117 15:58:40.148568   16907 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1117 15:58:40.148587   16907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1117 15:58:40.242815   16907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1117 15:58:40.534383   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:40.549913   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:40.551352   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:41.032231   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:41.045115   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:41.045387   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:41.490417   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.66384022s)
	I1117 15:58:41.490465   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:41.490477   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:41.490747   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:41.490816   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:41.490832   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:41.490845   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:41.490858   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:41.491137   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:41.491169   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:41.491189   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:41.532651   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:41.552073   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:41.552323   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:42.039096   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:42.045227   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:42.046845   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:42.568647   16907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.325792302s)
	I1117 15:58:42.568709   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:42.568726   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:42.569055   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:42.569076   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:42.569087   16907 main.go:141] libmachine: Making call to close driver server
	I1117 15:58:42.569098   16907 main.go:141] libmachine: (addons-875867) Calling .Close
	I1117 15:58:42.569112   16907 main.go:141] libmachine: (addons-875867) DBG | Closing plugin on server side
	I1117 15:58:42.569373   16907 main.go:141] libmachine: Successfully made call to close driver server
	I1117 15:58:42.569389   16907 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 15:58:42.570760   16907 addons.go:467] Verifying addon gcp-auth=true in "addons-875867"
	I1117 15:58:42.574301   16907 out.go:177] * Verifying gcp-auth addon...
	I1117 15:58:42.571189   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:42.571218   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:42.576635   16907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1117 15:58:42.577152   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:42.607903   16907 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1117 15:58:42.607922   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:42.610627   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:43.033216   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:43.046843   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:43.046949   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:43.115803   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:43.537580   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:43.550925   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:43.551036   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:43.616383   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:44.032169   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:44.045183   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:44.048765   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:44.114969   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:44.540939   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:44.546863   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:44.550027   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:44.616489   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:45.034018   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:45.045962   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:45.049674   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:45.115636   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:45.533342   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:45.546684   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:45.547591   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:45.614627   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:46.032011   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:46.045850   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:46.046241   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:46.115676   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:46.895060   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:46.896841   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:46.897600   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:46.899939   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:47.033364   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:47.045929   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:47.046198   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:47.115829   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:47.532773   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:47.546479   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:47.547247   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:47.616128   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:48.036271   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:48.048160   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:48.048358   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:48.115618   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:48.536275   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:48.557574   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:48.558232   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:48.624687   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:49.033369   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:49.079602   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:49.081086   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:49.116609   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:49.532842   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:49.544407   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:49.545467   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:49.615025   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:50.035243   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:50.045814   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:50.049078   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:50.114989   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:50.532501   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:50.545654   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:50.545846   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:50.617941   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:51.034008   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:51.050152   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:51.058206   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:51.121227   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:51.533299   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:51.545709   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:51.547817   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:51.615127   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:52.040168   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:52.045777   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:52.045910   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:52.548293   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:52.548525   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:52.552932   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:52.553298   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:52.649923   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:53.032861   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:53.045098   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:53.046215   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:53.114743   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:53.533680   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:53.545694   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:53.546026   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:53.614883   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:54.033017   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:54.046498   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:54.046771   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:54.115339   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:54.533386   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:54.544936   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:54.546408   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:54.616505   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:55.032692   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:55.045273   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:55.046949   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:55.115635   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:55.532660   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:55.544267   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:55.547407   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:55.618049   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:56.032306   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:56.047186   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:56.052034   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:56.115756   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:56.534055   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:56.550024   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:56.553113   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:56.616367   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:57.032337   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:57.046098   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:57.046230   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:57.116527   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:57.533251   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:57.545013   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:57.545327   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:57.618056   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:58.033342   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:58.046452   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:58.046687   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:58.116331   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:58.533599   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:58.545775   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:58.545938   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:58.616819   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:59.035105   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:59.045827   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:59.046382   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:59.115255   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:58:59.533838   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:58:59.545588   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:58:59.545639   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:58:59.614846   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:00.032498   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:00.046266   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:00.046656   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:00.121766   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:00.532413   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:00.547158   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:00.547529   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:00.615303   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:01.033158   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:01.045987   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:01.047758   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:01.118444   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:01.533840   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:01.546237   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:01.546707   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:01.616684   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:02.037559   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:02.044382   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:02.046526   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:02.116511   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:02.533714   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:02.545905   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:02.547773   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:02.617645   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:03.033381   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:03.048010   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:03.051416   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:03.115483   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:03.533348   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:03.544712   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:03.545625   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:03.615584   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:04.033056   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:04.049370   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:04.049675   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:04.134840   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:04.534497   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:04.544604   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:04.549025   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:04.622141   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:05.035363   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:05.044876   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:05.047076   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:05.115276   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:05.745093   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:05.747406   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:05.747972   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:05.748546   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:06.040194   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:06.044577   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:06.047600   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:06.116362   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:06.532469   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:06.546734   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:06.546929   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:06.615082   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:07.034071   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:07.046792   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:07.048746   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:07.115116   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:07.533750   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:07.547130   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:07.547790   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:07.617656   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:08.033343   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:08.051194   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:08.063273   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:08.118554   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:08.533918   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:08.546572   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:08.546838   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:08.615317   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:09.033568   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:09.044567   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:09.044937   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:09.115871   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:09.533878   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:09.544955   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:09.545311   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:09.615455   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:10.033766   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:10.044830   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:10.046037   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:10.115148   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:10.532143   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:10.546617   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:10.547060   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:10.615423   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:11.033015   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:11.045212   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:11.045863   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:11.122415   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:11.532539   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:11.544239   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:11.544475   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:11.615380   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:12.031887   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:12.044091   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:12.045250   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:12.114976   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:12.534019   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:12.545747   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:12.545747   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:12.615836   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:13.034600   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:13.046329   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:13.046707   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:13.118680   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:13.533068   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:13.544526   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:13.544881   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:13.618174   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:14.033110   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:14.048134   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:14.048224   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:14.115182   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:14.533117   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:14.544395   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:14.544721   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:14.619335   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:15.033856   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:15.048183   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:15.048228   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:15.116447   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:15.709673   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:15.709892   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:15.710001   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:15.711040   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:16.033250   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:16.044455   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:16.050229   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:16.115729   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:16.533823   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:16.544869   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:16.545942   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:16.615588   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:17.033502   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:17.045018   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:17.045098   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:17.118070   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:17.534825   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:17.548953   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:17.549324   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:17.616896   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:18.034421   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:18.046151   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:18.046617   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:18.117016   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:18.532217   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:18.546000   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:18.546625   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:18.617280   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:19.031940   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:19.045205   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:19.047524   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:19.114926   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:19.533743   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:19.544989   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:19.545368   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:19.614815   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:20.033730   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:20.045778   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:20.046104   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:20.117577   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:20.534558   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:20.544597   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:20.545662   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:20.616128   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:21.032336   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:21.046699   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:21.046819   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:21.115621   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:21.534589   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:21.547283   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:21.550444   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:21.615377   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:22.033011   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:22.047059   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1117 15:59:22.047491   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:22.122543   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:22.532288   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:22.546371   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:22.546531   16907 kapi.go:107] duration metric: took 45.039080454s to wait for kubernetes.io/minikube-addons=registry ...
	I1117 15:59:22.615257   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:23.032401   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:23.046781   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:23.115259   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:23.531785   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:23.544649   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:23.614963   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:24.036361   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:24.053719   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:24.114897   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:24.534319   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:24.545223   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:24.615234   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:25.031980   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:25.045458   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:25.115111   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:25.532574   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:25.550547   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:25.623966   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:26.036303   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:26.044253   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:26.117962   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:26.531941   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:26.552351   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:26.615537   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:27.032962   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:27.044690   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:27.117214   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:27.535803   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:27.544643   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:27.615645   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:28.034297   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:28.045377   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:28.115159   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:28.535586   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:28.545944   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:28.617753   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:29.035435   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:29.047121   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:29.115582   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:29.533138   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:29.545276   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:29.615906   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:30.033039   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:30.044973   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:30.115754   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:30.536285   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:30.545317   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:30.614914   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:31.033073   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:31.044753   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:31.114985   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:31.533613   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:31.545190   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:31.615233   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:32.033565   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:32.045850   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:32.115136   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:32.533694   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:32.543852   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:32.614899   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:33.034638   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:33.044625   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:33.115345   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:33.533337   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:33.544983   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:33.615767   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:34.034582   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:34.045944   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:34.116365   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:34.532970   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:34.545513   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:34.616686   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:35.033731   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:35.045015   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:35.115748   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:35.531624   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:35.546309   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:35.614833   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:36.036245   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:36.044484   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:36.114806   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:36.533153   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:36.546368   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:36.614722   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:37.033211   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:37.044976   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:37.117584   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:37.534031   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:37.544746   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:37.615752   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:38.032953   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1117 15:59:38.044966   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:38.115493   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:38.533339   16907 kapi.go:107] duration metric: took 58.552191659s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1117 15:59:38.545393   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:38.616237   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:39.046359   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:39.116170   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:39.545798   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:39.615825   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:40.044722   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:40.116031   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:40.545158   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:40.616108   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:41.045459   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:41.115149   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:41.545178   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:41.616082   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:42.047025   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:42.115630   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:42.544554   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:42.615601   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:43.045561   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:43.115826   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:43.546844   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:43.615792   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:44.044503   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:44.115055   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:44.546114   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:44.615576   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:45.044964   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:45.114799   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:45.547905   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:45.615867   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:46.044598   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:46.115756   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:46.547974   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:46.618984   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:47.045890   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:47.115616   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:47.547353   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:47.614929   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:48.045559   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:48.117007   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:48.545451   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:48.615659   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:49.047932   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:49.115381   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:49.546404   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:49.615776   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:50.044743   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:50.115561   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:50.545904   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:50.616866   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:51.045629   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:51.115647   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:51.545126   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:51.615256   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:52.045123   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:52.116585   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:52.549496   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:52.615027   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:53.047228   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:53.116063   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:53.547618   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:53.615694   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:54.045969   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:54.115406   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:54.545530   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:54.615397   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:55.045807   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:55.115463   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:55.544798   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:55.615028   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:56.044619   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:56.115347   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:56.546567   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:56.614965   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:57.045015   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:57.115541   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:57.544933   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:57.618611   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:58.045408   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:58.114866   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:58.545000   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:58.615914   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:59.044683   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:59.115260   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 15:59:59.545301   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 15:59:59.614605   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:00.046602   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:00.116132   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:00.544859   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:00.616063   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:01.045018   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:01.116281   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:01.545389   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:01.615347   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:02.045940   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:02.115882   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:02.546564   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:02.615868   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:03.051052   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:03.114743   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:03.545041   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:03.615777   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:04.044998   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:04.115573   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:04.545784   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:04.615471   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:05.047229   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:05.121456   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:05.544817   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:05.615608   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:06.045142   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:06.115678   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:06.544975   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:06.618034   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:07.045129   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:07.117307   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:07.545725   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:07.615231   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:08.049038   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:08.115657   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:08.545129   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:08.615668   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:09.044865   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:09.115267   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:09.545655   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:09.615150   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:10.046324   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:10.114995   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:10.544774   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:10.615161   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:11.045549   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:11.117091   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:11.545771   16907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1117 16:00:11.619868   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:12.044662   16907 kapi.go:107] duration metric: took 1m34.531888269s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1117 16:00:12.119282   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:12.616010   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:13.116192   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:13.616073   16907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1117 16:00:14.115456   16907 kapi.go:107] duration metric: took 1m31.538818357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1117 16:00:14.117163   16907 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-875867 cluster.
	I1117 16:00:14.118732   16907 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1117 16:00:14.120315   16907 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1117 16:00:14.121821   16907 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, nvidia-device-plugin, ingress-dns, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1117 16:00:14.123299   16907 addons.go:502] enable addons completed in 1m47.947559857s: enabled=[cloud-spanner storage-provisioner metrics-server helm-tiller inspektor-gadget nvidia-device-plugin ingress-dns default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1117 16:00:14.123341   16907 start.go:233] waiting for cluster config update ...
	I1117 16:00:14.123363   16907 start.go:242] writing updated cluster config ...
	I1117 16:00:14.123644   16907 ssh_runner.go:195] Run: rm -f paused
	I1117 16:00:14.179109   16907 start.go:600] kubectl: 1.28.4, cluster: 1.28.3 (minor skew: 0)
	I1117 16:00:14.181067   16907 out.go:177] * Done! kubectl is now configured to use "addons-875867" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c092f3229ef18       b135667c98980       2 seconds ago        Running             nginx                                    0                   ad21e5ff701e2       nginx
	ab0da72e2a631       98f6c3b32d565       8 seconds ago        Exited              helm-test                                0                   4b5ce0c302d5f       helm-test
	40c39cba85ea8       beae173ccac6a       10 seconds ago       Exited              registry-test                            0                   ac98f33af35b9       registry-test
	3d2f6e3152989       a416a98b71e22       14 seconds ago       Exited              helper-pod                               0                   9266e1d339521       helper-pod-delete-pvc-bb6db982-17de-4131-8663-4e9ae86f5bf3
	4113f70a8efe8       a416a98b71e22       20 seconds ago       Exited              busybox                                  0                   852ecbbc65706       test-local-path
	ada5ae33d9c06       a416a98b71e22       24 seconds ago       Exited              helper-pod                               0                   c2a43422f5afd       helper-pod-create-pvc-bb6db982-17de-4131-8663-4e9ae86f5bf3
	f7448aa37c70f       6d2a98b274382       29 seconds ago       Running             gcp-auth                                 0                   386e140f08fc5       gcp-auth-d4c87556c-zn96l
	1712fcd417e88       5aa0bf4798fa2       30 seconds ago       Running             controller                               0                   36aadf52edb97       ingress-nginx-controller-7c6974c4d8-q2cgc
	03466f20ff242       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	01394289a1abd       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	e1f80b392230b       e899260153aed       About a minute ago   Running             liveness-probe                           0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	210cfbbeebe7b       e255e073c508c       About a minute ago   Running             hostpath                                 0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	4474d7c984ba4       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	4426ddd51cb69       1ebff0f9671bc       About a minute ago   Exited              patch                                    1                   26020077718df       ingress-nginx-admission-patch-m7bdw
	558420fd31da4       1ebff0f9671bc       About a minute ago   Exited              create                                   0                   05149b8bee0e1       ingress-nginx-admission-create-k64k8
	1b2f0c61f55c0       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   65c719b20ee2b       snapshot-controller-58dbcc7b99-fqwnk
	618aae34df313       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   afc7a474b5807       local-path-provisioner-78b46b4d5c-csdh6
	d656e538a35b3       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   c02323033e554       csi-hostpath-resizer-0
	f5708c716eaf7       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   91b9465f473d5       csi-hostpath-attacher-0
	ad5d864d5ffda       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   3c6940b60a8fc       csi-hostpathplugin-k42s9
	4713fd5c495e2       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   60df9b3998ff6       snapshot-controller-58dbcc7b99-9xw2m
	8196943b8fa10       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   3f8aeccb7aef6       storage-provisioner
	931b907b86b23       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   4ed30a1cd78f5       kube-ingress-dns-minikube
	cbfe322e26036       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   19782024e0b23       coredns-5dd5756b68-zrzps
	596bfcb9ba124       bfc896cf80fba       2 minutes ago        Running             kube-proxy                               0                   4c5f6ca86d406       kube-proxy-fdr6g
	32d1553ff2f9a       6d1b4fd1b182d       2 minutes ago        Running             kube-scheduler                           0                   7db998dacc665       kube-scheduler-addons-875867
	286365e850bbd       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   bcdfc08cbfb8e       etcd-addons-875867
	664fabe8585b6       10baa1ca17068       2 minutes ago        Running             kube-controller-manager                  0                   a643a07da5fcc       kube-controller-manager-addons-875867
	ac43b07c435a4       5374347291230       2 minutes ago        Running             kube-apiserver                           0                   2f5e20791ba56       kube-apiserver-addons-875867
	
	* 
	* ==> containerd <==
	* -- Journal begins at Fri 2023-11-17 15:57:23 UTC, ends at Fri 2023-11-17 16:00:41 UTC. --
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.157774194Z" level=info msg="shim disconnected" id=079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.157858934Z" level=warning msg="cleaning up after shim disconnected" id=079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.157871713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.169134963Z" level=info msg="StopContainer for \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\" returns successfully"
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.170023554Z" level=info msg="StopPodSandbox for \"b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75\""
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.170225738Z" level=info msg="Container to stop \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.229104041Z" level=info msg="StopContainer for \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\" returns successfully"
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.230924571Z" level=info msg="StopPodSandbox for \"8dab5a80c7c5cd1067b5111c95f5ea5f4f8720792866e9b62daf5bee99cfe543\""
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.231376572Z" level=info msg="Container to stop \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.272449190Z" level=info msg="shim disconnected" id=b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.272865464Z" level=warning msg="cleaning up after shim disconnected" id=b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.272998935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.330793060Z" level=info msg="shim disconnected" id=8dab5a80c7c5cd1067b5111c95f5ea5f4f8720792866e9b62daf5bee99cfe543 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.331108485Z" level=warning msg="cleaning up after shim disconnected" id=8dab5a80c7c5cd1067b5111c95f5ea5f4f8720792866e9b62daf5bee99cfe543 namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.331264172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.394218547Z" level=info msg="TearDown network for sandbox \"b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75\" successfully"
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.394358669Z" level=info msg="StopPodSandbox for \"b3b4606fb53930657fa6561c191ac6f20722cd848744b95d0bb413c8712baf75\" returns successfully"
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.442769596Z" level=info msg="TearDown network for sandbox \"8dab5a80c7c5cd1067b5111c95f5ea5f4f8720792866e9b62daf5bee99cfe543\" successfully"
	Nov 17 16:00:40 addons-875867 containerd[688]: time="2023-11-17T16:00:40.443228491Z" level=info msg="StopPodSandbox for \"8dab5a80c7c5cd1067b5111c95f5ea5f4f8720792866e9b62daf5bee99cfe543\" returns successfully"
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.002940168Z" level=info msg="RemoveContainer for \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\""
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.033139002Z" level=info msg="RemoveContainer for \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\" returns successfully"
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.039263889Z" level=error msg="ContainerStatus for \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\": not found"
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.042677485Z" level=info msg="RemoveContainer for \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\""
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.059504099Z" level=info msg="RemoveContainer for \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\" returns successfully"
	Nov 17 16:00:41 addons-875867 containerd[688]: time="2023-11-17T16:00:41.067369167Z" level=error msg="ContainerStatus for \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\": not found"
	
	* 
	* ==> coredns [cbfe322e2603666fcd8943407559fc80d35458a0f5c299d3f663f72f3132f5aa] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41218 - 34088 "HINFO IN 3793394979609935905.5064812161108123177. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010133424s
	[INFO] 10.244.0.20:50141 - 8767 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001362427s
	[INFO] 10.244.0.20:37775 - 47895 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004063894s
	[INFO] 10.244.0.20:48849 - 57398 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000461145s
	[INFO] 10.244.0.20:59484 - 56143 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012329s
	[INFO] 10.244.0.20:36482 - 44475 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151152s
	[INFO] 10.244.0.20:44845 - 34042 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000312523s
	[INFO] 10.244.0.20:43573 - 24440 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001472391s
	[INFO] 10.244.0.20:38603 - 37353 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000820132s
	[INFO] 10.244.0.24:50571 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000521412s
	[INFO] 10.244.0.24:53091 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150641s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-875867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-875867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49db7ae766960f8f9e07cffcbe974581755c3ae6
	                    minikube.k8s.io/name=addons-875867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_17T15_58_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-875867
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-875867"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Nov 2023 15:58:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-875867
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Nov 2023 16:00:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Nov 2023 16:00:16 +0000   Fri, 17 Nov 2023 15:58:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Nov 2023 16:00:16 +0000   Fri, 17 Nov 2023 15:58:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Nov 2023 16:00:16 +0000   Fri, 17 Nov 2023 15:58:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Nov 2023 16:00:16 +0000   Fri, 17 Nov 2023 15:58:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    addons-875867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 287ae663a094479b913d5c604956f4d0
	  System UUID:                287ae663-a094-479b-913d-5c604956f4d0
	  Boot ID:                    2e89c74b-b4d2-4975-ba5e-51b734c2cf09
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.9
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-d4c87556c-zn96l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-q2cgc    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m5s
	  kube-system                 coredns-5dd5756b68-zrzps                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m16s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpathplugin-k42s9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 etcd-addons-875867                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m28s
	  kube-system                 kube-apiserver-addons-875867                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-addons-875867        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-fdr6g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-875867                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 snapshot-controller-58dbcc7b99-9xw2m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 snapshot-controller-58dbcc7b99-fqwnk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  local-path-storage          local-path-provisioner-78b46b4d5c-csdh6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m14s  kube-proxy       
	  Normal  Starting                 2m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m28s  kubelet          Node addons-875867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s  kubelet          Node addons-875867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s  kubelet          Node addons-875867 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m28s  kubelet          Node addons-875867 status is now: NodeReady
	  Normal  RegisteredNode           2m17s  node-controller  Node addons-875867 event: Registered Node addons-875867 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.603798] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.579559] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135424] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.004029] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.833054] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.116963] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.146649] systemd-fstab-generator[581]: Ignoring "noauto" for root device
	[  +0.103759] systemd-fstab-generator[592]: Ignoring "noauto" for root device
	[  +0.250800] systemd-fstab-generator[619]: Ignoring "noauto" for root device
	[  +6.230126] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[Nov17 15:58] systemd-fstab-generator[985]: Ignoring "noauto" for root device
	[  +9.277558] systemd-fstab-generator[1347]: Ignoring "noauto" for root device
	[ +18.917418] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.072632] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.052724] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.417656] kauditd_printk_skb: 4 callbacks suppressed
	[Nov17 15:59] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.410718] kauditd_printk_skb: 3 callbacks suppressed
	[Nov17 16:00] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.090113] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.117066] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.358040] kauditd_printk_skb: 32 callbacks suppressed
	
	* 
	* ==> etcd [286365e850bbdf350b9d76e7ac9760a0bcd890c8a558359fa6d4593791647dca] <==
	* {"level":"info","ts":"2023-11-17T15:59:05.733872Z","caller":"traceutil/trace.go:171","msg":"trace[1585718922] transaction","detail":"{read_only:false; response_revision:861; number_of_response:1; }","duration":"347.884146ms","start":"2023-11-17T15:59:05.385983Z","end":"2023-11-17T15:59:05.733867Z","steps":["trace[1585718922] 'process raft request'  (duration: 347.670363ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:05.73396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-17T15:59:05.385941Z","time spent":"347.977347ms","remote":"127.0.0.1:53496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":801,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-7c66d45ddc-hfr5l.1798744fca9861da\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-7c66d45ddc-hfr5l.1798744fca9861da\" value_size:706 lease:8152513536086197533 >> failure:<>"}
	{"level":"warn","ts":"2023-11-17T15:59:05.734237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.476189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81838"}
	{"level":"info","ts":"2023-11-17T15:59:05.734278Z","caller":"traceutil/trace.go:171","msg":"trace[1837427615] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:861; }","duration":"208.529391ms","start":"2023-11-17T15:59:05.52574Z","end":"2023-11-17T15:59:05.734269Z","steps":["trace[1837427615] 'agreement among raft nodes before linearized reading'  (duration: 208.203699ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:05.734345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.285724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2023-11-17T15:59:05.734369Z","caller":"traceutil/trace.go:171","msg":"trace[396183667] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:861; }","duration":"124.313981ms","start":"2023-11-17T15:59:05.610049Z","end":"2023-11-17T15:59:05.734363Z","steps":["trace[396183667] 'agreement among raft nodes before linearized reading'  (duration: 124.247419ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:05.734433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.365311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-17T15:59:05.734445Z","caller":"traceutil/trace.go:171","msg":"trace[1240998591] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:861; }","duration":"273.377454ms","start":"2023-11-17T15:59:05.461064Z","end":"2023-11-17T15:59:05.734441Z","steps":["trace[1240998591] 'agreement among raft nodes before linearized reading'  (duration: 273.355699ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:05.734498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.996958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2023-11-17T15:59:05.73452Z","caller":"traceutil/trace.go:171","msg":"trace[1202140113] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:861; }","duration":"196.021397ms","start":"2023-11-17T15:59:05.538493Z","end":"2023-11-17T15:59:05.734515Z","steps":["trace[1202140113] 'agreement among raft nodes before linearized reading'  (duration: 195.964391ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:05.734754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.261376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81838"}
	{"level":"info","ts":"2023-11-17T15:59:05.734772Z","caller":"traceutil/trace.go:171","msg":"trace[505146185] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:861; }","duration":"196.281109ms","start":"2023-11-17T15:59:05.538485Z","end":"2023-11-17T15:59:05.734766Z","steps":["trace[505146185] 'agreement among raft nodes before linearized reading'  (duration: 196.182262ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-17T15:59:15.684955Z","caller":"traceutil/trace.go:171","msg":"trace[146960725] linearizableReadLoop","detail":"{readStateIndex:935; appliedIndex:934; }","duration":"158.164337ms","start":"2023-11-17T15:59:15.526747Z","end":"2023-11-17T15:59:15.684912Z","steps":["trace[146960725] 'read index received'  (duration: 157.075313ms)","trace[146960725] 'applied index is now lower than readState.Index'  (duration: 1.087928ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-17T15:59:15.68613Z","caller":"traceutil/trace.go:171","msg":"trace[482141336] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"333.188686ms","start":"2023-11-17T15:59:15.352926Z","end":"2023-11-17T15:59:15.686114Z","steps":["trace[482141336] 'process raft request'  (duration: 331.493912ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:15.690851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.770094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81803"}
	{"level":"info","ts":"2023-11-17T15:59:15.69097Z","caller":"traceutil/trace.go:171","msg":"trace[790311307] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:911; }","duration":"151.891425ms","start":"2023-11-17T15:59:15.53905Z","end":"2023-11-17T15:59:15.690941Z","steps":["trace[790311307] 'agreement among raft nodes before linearized reading'  (duration: 147.091661ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:15.692257Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-17T15:59:15.352909Z","time spent":"335.593867ms","remote":"127.0.0.1:53518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5732,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-875867\" mod_revision:516 > success:<request_put:<key:\"/registry/minions/addons-875867\" value_size:5693 >> failure:<request_range:<key:\"/registry/minions/addons-875867\" > >"}
	{"level":"warn","ts":"2023-11-17T15:59:15.694866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.129084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81803"}
	{"level":"info","ts":"2023-11-17T15:59:15.695346Z","caller":"traceutil/trace.go:171","msg":"trace[227010886] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:911; }","duration":"168.606153ms","start":"2023-11-17T15:59:15.526716Z","end":"2023-11-17T15:59:15.695323Z","steps":["trace[227010886] 'agreement among raft nodes before linearized reading'  (duration: 167.977945ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:15.695935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.038697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2023-11-17T15:59:15.695998Z","caller":"traceutil/trace.go:171","msg":"trace[796937560] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:911; }","duration":"156.106589ms","start":"2023-11-17T15:59:15.539882Z","end":"2023-11-17T15:59:15.695989Z","steps":["trace[796937560] 'agreement among raft nodes before linearized reading'  (duration: 156.006122ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-17T15:59:22.520021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.121789ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17375885572940975124 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:916 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-17T15:59:22.520138Z","caller":"traceutil/trace.go:171","msg":"trace[819556866] transaction","detail":"{read_only:false; response_revision:936; number_of_response:1; }","duration":"142.928307ms","start":"2023-11-17T15:59:22.377162Z","end":"2023-11-17T15:59:22.52009Z","steps":["trace[819556866] 'compare'  (duration: 136.016308ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-17T15:59:38.905045Z","caller":"traceutil/trace.go:171","msg":"trace[1077480903] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"279.980464ms","start":"2023-11-17T15:59:38.625034Z","end":"2023-11-17T15:59:38.905015Z","steps":["trace[1077480903] 'process raft request'  (duration: 278.028832ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-17T16:00:32.203005Z","caller":"traceutil/trace.go:171","msg":"trace[1838697063] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"180.887251ms","start":"2023-11-17T16:00:32.022051Z","end":"2023-11-17T16:00:32.202938Z","steps":["trace[1838697063] 'process raft request'  (duration: 180.429685ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [f7448aa37c70f71568e46170f9000a9e05840739d52ef49c97c19f7fd3c05465] <==
	* 2023/11/17 16:00:12 GCP Auth Webhook started!
	2023/11/17 16:00:15 Ready to marshal response ...
	2023/11/17 16:00:15 Ready to write response ...
	2023/11/17 16:00:15 Ready to marshal response ...
	2023/11/17 16:00:15 Ready to write response ...
	2023/11/17 16:00:23 Ready to marshal response ...
	2023/11/17 16:00:23 Ready to write response ...
	2023/11/17 16:00:24 Ready to marshal response ...
	2023/11/17 16:00:24 Ready to write response ...
	2023/11/17 16:00:25 Ready to marshal response ...
	2023/11/17 16:00:25 Ready to write response ...
	2023/11/17 16:00:25 Ready to marshal response ...
	2023/11/17 16:00:25 Ready to write response ...
	2023/11/17 16:00:36 Ready to marshal response ...
	2023/11/17 16:00:36 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  16:00:42 up 3 min,  0 users,  load average: 4.91, 2.04, 0.79
	Linux addons-875867 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ac43b07c435a45edacd7cb544e74bc8b2bf1ce72b83b9361616e1a8c130513bb] <==
	* I1117 15:58:37.212828       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.103.53.227"}
	I1117 15:58:37.258084       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.98.142.130"}
	I1117 15:58:37.308373       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W1117 15:58:38.617370       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1117 15:58:39.693950       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.101.173.122"}
	I1117 15:58:39.720391       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1117 15:58:39.888032       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.77.119"}
	W1117 15:58:40.868028       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1117 15:58:42.335256       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.222.170"}
	W1117 15:59:06.265149       1 handler_proxy.go:93] no RequestInfo found in the context
	E1117 15:59:06.265251       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1117 15:59:06.266396       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.124.209:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.124.209:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.124.209:443: connect: connection refused
	I1117 15:59:06.309915       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1117 15:59:06.334371       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1117 15:59:10.343040       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1117 16:00:10.347644       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1117 16:00:36.225147       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1117 16:00:36.483272       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.75.243"}
	I1117 16:00:37.315967       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1117 16:00:37.340259       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1117 16:00:38.378412       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1117 16:00:38.578725       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [664fabe8585b6d46e94c1858346e0b94e6c4635557de779f6158a6d85150b2f6] <==
	* I1117 16:00:03.037962       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1117 16:00:03.106141       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1117 16:00:03.112451       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1117 16:00:11.647503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="130.446µs"
	I1117 16:00:13.654680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="14.225758ms"
	I1117 16:00:13.655254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="80.132µs"
	I1117 16:00:14.919156       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1117 16:00:14.939321       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:14.940169       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:15.271150       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:15.271995       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:19.952784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="7.081µs"
	I1117 16:00:21.617633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="36.016859ms"
	I1117 16:00:21.617905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="63.572µs"
	I1117 16:00:23.031095       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:33.991284       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="43.306µs"
	I1117 16:00:35.680349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="7.153µs"
	E1117 16:00:38.389619       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1117 16:00:39.578086       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1117 16:00:39.578782       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1117 16:00:39.829284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5649c69bf6" duration="7.261µs"
	I1117 16:00:41.466409       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1117 16:00:41.472027       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1117 16:00:42.118912       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1117 16:00:42.119024       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [596bfcb9ba124bbc7653a75dd20f7006fb025a8a05c351e659a3ac2629c17bfe] <==
	* I1117 15:58:27.627386       1 server_others.go:69] "Using iptables proxy"
	I1117 15:58:27.638012       1 node.go:141] Successfully retrieved node IP: 192.168.39.118
	I1117 15:58:27.723600       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1117 15:58:27.723644       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1117 15:58:27.834099       1 server_others.go:152] "Using iptables Proxier"
	I1117 15:58:27.834175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1117 15:58:27.834375       1 server.go:846] "Version info" version="v1.28.3"
	I1117 15:58:27.834413       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1117 15:58:27.842755       1 config.go:188] "Starting service config controller"
	I1117 15:58:27.842888       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1117 15:58:27.846224       1 config.go:97] "Starting endpoint slice config controller"
	I1117 15:58:27.846404       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1117 15:58:27.864766       1 config.go:315] "Starting node config controller"
	I1117 15:58:27.864806       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1117 15:58:27.943728       1 shared_informer.go:318] Caches are synced for service config
	I1117 15:58:27.947365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1117 15:58:27.977927       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [32d1553ff2f9a3b0f8e234df29034f1808f1192811863e8e7a18d7ffc5e9797c] <==
	* E1117 15:58:10.437058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1117 15:58:10.437209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:10.437385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:10.437507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1117 15:58:10.437763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1117 15:58:10.437981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1117 15:58:10.438167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:10.438200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1117 15:58:11.271658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1117 15:58:11.271687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1117 15:58:11.324287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1117 15:58:11.324599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1117 15:58:11.370154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:11.370360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1117 15:58:11.414867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1117 15:58:11.414934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1117 15:58:11.537884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:11.537937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1117 15:58:11.622423       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1117 15:58:11.622485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1117 15:58:11.673689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1117 15:58:11.673741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1117 15:58:11.781403       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1117 15:58:11.781429       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1117 15:58:14.118251       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-11-17 15:57:23 UTC, ends at Fri 2023-11-17 16:00:42 UTC. --
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.419918    1354 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=2.514880792 podCreationTimestamp="2023-11-17 16:00:36 +0000 UTC" firstStartedPulling="2023-11-17 16:00:37.674108986 +0000 UTC m=+143.898171987" lastFinishedPulling="2023-11-17 16:00:39.579097787 +0000 UTC m=+145.803160790" observedRunningTime="2023-11-17 16:00:39.99014593 +0000 UTC m=+146.214208970" watchObservedRunningTime="2023-11-17 16:00:40.419869595 +0000 UTC m=+146.643932614"
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.572099    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/351dc9d0-446d-4fc9-b043-17d220adf88d-gcp-creds\") pod \"351dc9d0-446d-4fc9-b043-17d220adf88d\" (UID: \"351dc9d0-446d-4fc9-b043-17d220adf88d\") "
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.572146    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7cww\" (UniqueName: \"kubernetes.io/projected/c45ace11-9aee-4b12-b2cf-d911fa4a3cc5-kube-api-access-l7cww\") pod \"c45ace11-9aee-4b12-b2cf-d911fa4a3cc5\" (UID: \"c45ace11-9aee-4b12-b2cf-d911fa4a3cc5\") "
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.572172    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n472w\" (UniqueName: \"kubernetes.io/projected/351dc9d0-446d-4fc9-b043-17d220adf88d-kube-api-access-n472w\") pod \"351dc9d0-446d-4fc9-b043-17d220adf88d\" (UID: \"351dc9d0-446d-4fc9-b043-17d220adf88d\") "
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.572261    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6a3896db-8562-11ee-8ef6-3284b35e3dae\") pod \"351dc9d0-446d-4fc9-b043-17d220adf88d\" (UID: \"351dc9d0-446d-4fc9-b043-17d220adf88d\") "
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.572494    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/351dc9d0-446d-4fc9-b043-17d220adf88d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "351dc9d0-446d-4fc9-b043-17d220adf88d" (UID: "351dc9d0-446d-4fc9-b043-17d220adf88d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.587246    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351dc9d0-446d-4fc9-b043-17d220adf88d-kube-api-access-n472w" (OuterVolumeSpecName: "kube-api-access-n472w") pod "351dc9d0-446d-4fc9-b043-17d220adf88d" (UID: "351dc9d0-446d-4fc9-b043-17d220adf88d"). InnerVolumeSpecName "kube-api-access-n472w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.589084    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^6a3896db-8562-11ee-8ef6-3284b35e3dae" (OuterVolumeSpecName: "task-pv-storage") pod "351dc9d0-446d-4fc9-b043-17d220adf88d" (UID: "351dc9d0-446d-4fc9-b043-17d220adf88d"). InnerVolumeSpecName "pvc-c658918e-ec9d-44f6-9f97-c25b14b4a02c". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.591603    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45ace11-9aee-4b12-b2cf-d911fa4a3cc5-kube-api-access-l7cww" (OuterVolumeSpecName: "kube-api-access-l7cww") pod "c45ace11-9aee-4b12-b2cf-d911fa4a3cc5" (UID: "c45ace11-9aee-4b12-b2cf-d911fa4a3cc5"). InnerVolumeSpecName "kube-api-access-l7cww". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.673257    1354 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/351dc9d0-446d-4fc9-b043-17d220adf88d-gcp-creds\") on node \"addons-875867\" DevicePath \"\""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.673642    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l7cww\" (UniqueName: \"kubernetes.io/projected/c45ace11-9aee-4b12-b2cf-d911fa4a3cc5-kube-api-access-l7cww\") on node \"addons-875867\" DevicePath \"\""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.673721    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n472w\" (UniqueName: \"kubernetes.io/projected/351dc9d0-446d-4fc9-b043-17d220adf88d-kube-api-access-n472w\") on node \"addons-875867\" DevicePath \"\""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.673794    1354 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-c658918e-ec9d-44f6-9f97-c25b14b4a02c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6a3896db-8562-11ee-8ef6-3284b35e3dae\") on node \"addons-875867\" "
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.679426    1354 operation_generator.go:996] UnmountDevice succeeded for volume "pvc-c658918e-ec9d-44f6-9f97-c25b14b4a02c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^6a3896db-8562-11ee-8ef6-3284b35e3dae") on node "addons-875867"
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.775193    1354 reconciler_common.go:300] "Volume detached for volume \"pvc-c658918e-ec9d-44f6-9f97-c25b14b4a02c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6a3896db-8562-11ee-8ef6-3284b35e3dae\") on node \"addons-875867\" DevicePath \"\""
	Nov 17 16:00:40 addons-875867 kubelet[1354]: I1117 16:00:40.995124    1354 scope.go:117] "RemoveContainer" containerID="4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: I1117 16:00:41.038763    1354 scope.go:117] "RemoveContainer" containerID="4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: E1117 16:00:41.039591    1354 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\": not found" containerID="4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: I1117 16:00:41.039697    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72"} err="failed to get container status \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e449834814e492df405a5e2a20ee7ef555b9b899794cdf25be3ca2551055a72\": not found"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: I1117 16:00:41.039758    1354 scope.go:117] "RemoveContainer" containerID="079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: I1117 16:00:41.064905    1354 scope.go:117] "RemoveContainer" containerID="079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: E1117 16:00:41.072698    1354 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\": not found" containerID="079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35"
	Nov 17 16:00:41 addons-875867 kubelet[1354]: I1117 16:00:41.072787    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35"} err="failed to get container status \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\": rpc error: code = NotFound desc = an error occurred when try to find container \"079635683587971dfaae879b6fafe54dda221befaa0e8dc035475fe6ade27c35\": not found"
	Nov 17 16:00:42 addons-875867 kubelet[1354]: I1117 16:00:42.021385    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="351dc9d0-446d-4fc9-b043-17d220adf88d" path="/var/lib/kubelet/pods/351dc9d0-446d-4fc9-b043-17d220adf88d/volumes"
	Nov 17 16:00:42 addons-875867 kubelet[1354]: I1117 16:00:42.028222    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c45ace11-9aee-4b12-b2cf-d911fa4a3cc5" path="/var/lib/kubelet/pods/c45ace11-9aee-4b12-b2cf-d911fa4a3cc5/volumes"
	
	* 
	* ==> storage-provisioner [8196943b8fa1047dba40d738834aed68ad22d2ce81ee0f2557d67907a7a7299e] <==
	* I1117 15:59:00.062669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1117 15:59:00.080850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1117 15:59:00.081266       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1117 15:59:00.096596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1117 15:59:00.097050       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-875867_93b48546-dfc8-4e70-aa38-389de2e1bfe8!
	I1117 15:59:00.099884       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19b94510-e381-47b5-ae90-68b5e0b45d5e", APIVersion:"v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-875867_93b48546-dfc8-4e70-aa38-389de2e1bfe8 became leader
	I1117 15:59:00.198221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-875867_93b48546-dfc8-4e70-aa38-389de2e1bfe8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-875867 -n addons-875867
helpers_test.go:261: (dbg) Run:  kubectl --context addons-875867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-k64k8 ingress-nginx-admission-patch-m7bdw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-875867 describe pod ingress-nginx-admission-create-k64k8 ingress-nginx-admission-patch-m7bdw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-875867 describe pod ingress-nginx-admission-create-k64k8 ingress-nginx-admission-patch-m7bdw: exit status 1 (62.092539ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k64k8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-m7bdw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-875867 describe pod ingress-nginx-admission-create-k64k8 ingress-nginx-admission-patch-m7bdw: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.27s)

                                                
                                    
TestErrorSpam/setup (63.83s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-198649 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-198649 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-198649 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-198649 --driver=kvm2  --container-runtime=containerd: (1m3.829112318s)
error_spam_test.go:96: unexpected stderr: "X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3: no such file or directory"
error_spam_test.go:110: minikube stdout:
* [nospam-198649] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17634
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-198649 in cluster nospam-198649
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.3 on containerd 1.7.9 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "nospam-198649" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17634-9289/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3: no such file or directory
--- FAIL: TestErrorSpam/setup (63.83s)

                                                
                                    

Test pass (268/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.11
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 5.04
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.57
20 TestOffline 177.14
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 184.13
27 TestAddons/parallel/Registry 20
28 TestAddons/parallel/Ingress 20.21
29 TestAddons/parallel/InspektorGadget 11.16
30 TestAddons/parallel/MetricsServer 6.05
31 TestAddons/parallel/HelmTiller 15.6
33 TestAddons/parallel/CSI 51.24
35 TestAddons/parallel/CloudSpanner 5.75
36 TestAddons/parallel/LocalPath 11.51
37 TestAddons/parallel/NvidiaDevicePlugin 5.92
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/StoppedEnableDisable 92.46
42 TestCertOptions 111.97
43 TestCertExpiration 279.14
45 TestForceSystemdFlag 78.28
46 TestForceSystemdEnv 99.92
48 TestKVMDriverInstallOrUpdate 2.42
53 TestErrorSpam/start 0.39
54 TestErrorSpam/status 0.79
55 TestErrorSpam/pause 1.64
56 TestErrorSpam/unpause 1.9
57 TestErrorSpam/stop 1.58
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 119.61
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.9
69 TestFunctional/serial/CacheCmd/cache/add_local 1.39
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 41.4
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.6
80 TestFunctional/serial/LogsFileCmd 1.64
81 TestFunctional/serial/InvalidService 3.8
83 TestFunctional/parallel/ConfigCmd 0.43
84 TestFunctional/parallel/DashboardCmd 13.17
85 TestFunctional/parallel/DryRun 0.31
86 TestFunctional/parallel/InternationalLanguage 0.15
87 TestFunctional/parallel/StatusCmd 1
91 TestFunctional/parallel/ServiceCmdConnect 12.66
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 40.38
95 TestFunctional/parallel/SSHCmd 0.46
96 TestFunctional/parallel/CpCmd 0.93
97 TestFunctional/parallel/MySQL 30.75
98 TestFunctional/parallel/FileSync 0.26
99 TestFunctional/parallel/CertSync 1.48
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
107 TestFunctional/parallel/License 0.19
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 0.85
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
114 TestFunctional/parallel/ImageCommands/ImageBuild 4.46
115 TestFunctional/parallel/ImageCommands/Setup 1.08
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.96
129 TestFunctional/parallel/MountCmd/any-port 18.22
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.93
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.68
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.64
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
134 TestFunctional/parallel/MountCmd/specific-port 1.93
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.36
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.79
138 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
140 TestFunctional/parallel/ProfileCmd/profile_list 0.29
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
142 TestFunctional/parallel/ServiceCmd/List 1.32
143 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
145 TestFunctional/parallel/ServiceCmd/Format 0.42
146 TestFunctional/parallel/ServiceCmd/URL 0.37
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.02
153 TestIngressAddonLegacy/StartLegacyK8sCluster 110.79
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.61
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.13
160 TestJSONOutput/start/Command 82.12
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.71
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.66
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 2.1
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.23
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 140.04
192 TestMountStart/serial/StartWithMountFirst 28.03
193 TestMountStart/serial/VerifyMountFirst 0.43
194 TestMountStart/serial/StartWithMountSecond 27.76
195 TestMountStart/serial/VerifyMountSecond 0.41
196 TestMountStart/serial/DeleteFirst 0.68
197 TestMountStart/serial/VerifyMountPostDelete 0.42
198 TestMountStart/serial/Stop 1.2
199 TestMountStart/serial/RestartStopped 24.24
200 TestMountStart/serial/VerifyMountPostStop 0.41
203 TestMultiNode/serial/FreshStart2Nodes 145.34
204 TestMultiNode/serial/DeployApp2Nodes 4.35
205 TestMultiNode/serial/PingHostFrom2Pods 0.94
206 TestMultiNode/serial/AddNode 41.49
207 TestMultiNode/serial/ProfileList 0.23
208 TestMultiNode/serial/CopyFile 8.05
209 TestMultiNode/serial/StopNode 2.2
210 TestMultiNode/serial/StartAfterStop 27.56
211 TestMultiNode/serial/RestartKeepsNodes 321.05
212 TestMultiNode/serial/DeleteNode 1.65
213 TestMultiNode/serial/StopMultiNode 183.8
214 TestMultiNode/serial/RestartMultiNode 99.82
215 TestMultiNode/serial/ValidateNameConflict 69.59
220 TestPreload 246.08
222 TestScheduledStopUnix 139.1
226 TestRunningBinaryUpgrade 200.23
228 TestKubernetesUpgrade 205.61
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
232 TestNoKubernetes/serial/StartWithK8s 151.5
233 TestNoKubernetes/serial/StartWithStopK8s 17.07
234 TestNoKubernetes/serial/Start 29.08
235 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
236 TestNoKubernetes/serial/ProfileList 0.76
237 TestNoKubernetes/serial/Stop 1.27
238 TestNoKubernetes/serial/StartNoArgs 74.69
239 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
240 TestStoppedBinaryUpgrade/Setup 1.18
241 TestStoppedBinaryUpgrade/Upgrade 137.13
249 TestNetworkPlugins/group/false 3.93
261 TestPause/serial/Start 158.86
262 TestNetworkPlugins/group/auto/Start 149.8
263 TestNetworkPlugins/group/kindnet/Start 126.31
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
265 TestNetworkPlugins/group/calico/Start 165.19
266 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
267 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
268 TestNetworkPlugins/group/kindnet/NetCatPod 10.44
269 TestNetworkPlugins/group/auto/KubeletFlags 0.26
270 TestPause/serial/SecondStartNoReconfiguration 7.45
271 TestNetworkPlugins/group/auto/NetCatPod 11.58
272 TestPause/serial/Pause 0.74
273 TestPause/serial/VerifyStatus 0.27
274 TestPause/serial/Unpause 0.74
275 TestNetworkPlugins/group/kindnet/DNS 0.21
276 TestNetworkPlugins/group/kindnet/Localhost 0.16
277 TestNetworkPlugins/group/kindnet/HairPin 0.17
278 TestPause/serial/PauseAgain 0.94
279 TestPause/serial/DeletePaused 1.18
280 TestPause/serial/VerifyDeletedResources 0.55
281 TestNetworkPlugins/group/auto/DNS 0.19
282 TestNetworkPlugins/group/auto/Localhost 0.17
283 TestNetworkPlugins/group/auto/HairPin 0.21
284 TestNetworkPlugins/group/custom-flannel/Start 114.02
285 TestNetworkPlugins/group/enable-default-cni/Start 154.56
286 TestNetworkPlugins/group/flannel/Start 154.56
287 TestNetworkPlugins/group/calico/ControllerPod 5.03
288 TestNetworkPlugins/group/calico/KubeletFlags 0.43
289 TestNetworkPlugins/group/calico/NetCatPod 12.43
290 TestNetworkPlugins/group/calico/DNS 0.18
291 TestNetworkPlugins/group/calico/Localhost 0.15
292 TestNetworkPlugins/group/calico/HairPin 0.17
293 TestNetworkPlugins/group/bridge/Start 150.69
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
296 TestNetworkPlugins/group/custom-flannel/DNS 0.22
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
300 TestStartStop/group/old-k8s-version/serial/FirstStart 146.88
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.45
303 TestNetworkPlugins/group/flannel/ControllerPod 5.04
304 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
305 TestNetworkPlugins/group/flannel/NetCatPod 11.59
306 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
307 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
308 TestNetworkPlugins/group/enable-default-cni/HairPin 0.26
309 TestNetworkPlugins/group/flannel/DNS 0.22
310 TestNetworkPlugins/group/flannel/Localhost 0.17
311 TestNetworkPlugins/group/flannel/HairPin 0.19
313 TestStartStop/group/no-preload/serial/FirstStart 88.47
315 TestStartStop/group/embed-certs/serial/FirstStart 146.79
316 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
317 TestNetworkPlugins/group/bridge/NetCatPod 9.36
318 TestNetworkPlugins/group/bridge/DNS 0.24
319 TestNetworkPlugins/group/bridge/Localhost 0.2
320 TestNetworkPlugins/group/bridge/HairPin 0.2
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.36
323 TestStartStop/group/no-preload/serial/DeployApp 8.57
324 TestStartStop/group/old-k8s-version/serial/DeployApp 8.57
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.06
326 TestStartStop/group/no-preload/serial/Stop 92.01
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
328 TestStartStop/group/old-k8s-version/serial/Stop 92.64
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
331 TestStartStop/group/embed-certs/serial/DeployApp 8.46
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.84
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
334 TestStartStop/group/embed-certs/serial/Stop 91.86
335 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
336 TestStartStop/group/no-preload/serial/SecondStart 334.99
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
338 TestStartStop/group/old-k8s-version/serial/SecondStart 466.03
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 361.01
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
342 TestStartStop/group/embed-certs/serial/SecondStart 354.53
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
346 TestStartStop/group/no-preload/serial/Pause 2.96
348 TestStartStop/group/newest-cni/serial/FirstStart 90.85
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 22.03
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.32
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.6
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
355 TestStartStop/group/newest-cni/serial/Stop 7.13
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
358 TestStartStop/group/embed-certs/serial/Pause 3.26
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
361 TestStartStop/group/newest-cni/serial/SecondStart 46.73
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/old-k8s-version/serial/Pause 2.6
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
369 TestStartStop/group/newest-cni/serial/Pause 2.59
TestDownloadOnly/v1.16.0/json-events (10.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196672 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196672 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.111678778s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.11s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196672
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196672: exit status 85 (76.22009ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:56 UTC |          |
	|         | -p download-only-196672        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 15:56:53
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:56:53.733059   16550 out.go:296] Setting OutFile to fd 1 ...
	I1117 15:56:53.733307   16550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:56:53.733315   16550 out.go:309] Setting ErrFile to fd 2...
	I1117 15:56:53.733320   16550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:56:53.733511   16550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	W1117 15:56:53.733629   16550 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17634-9289/.minikube/config/config.json: open /home/jenkins/minikube-integration/17634-9289/.minikube/config/config.json: no such file or directory
	I1117 15:56:53.734204   16550 out.go:303] Setting JSON to true
	I1117 15:56:53.735087   16550 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2363,"bootTime":1700234251,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 15:56:53.735154   16550 start.go:138] virtualization: kvm guest
	I1117 15:56:53.737849   16550 out.go:97] [download-only-196672] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 15:56:53.739576   16550 out.go:169] MINIKUBE_LOCATION=17634
	W1117 15:56:53.737973   16550 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 15:56:53.738048   16550 notify.go:220] Checking for updates...
	I1117 15:56:53.742551   16550 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 15:56:53.744159   16550 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 15:56:53.745669   16550 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 15:56:53.747111   16550 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1117 15:56:53.749642   16550 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1117 15:56:53.749889   16550 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 15:56:53.850051   16550 out.go:97] Using the kvm2 driver based on user configuration
	I1117 15:56:53.850078   16550 start.go:298] selected driver: kvm2
	I1117 15:56:53.850086   16550 start.go:902] validating driver "kvm2" against <nil>
	I1117 15:56:53.850553   16550 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:56:53.850707   16550 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9289/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 15:56:53.865309   16550 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 15:56:53.865360   16550 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1117 15:56:53.865863   16550 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1117 15:56:53.866015   16550 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 15:56:53.866084   16550 cni.go:84] Creating CNI manager for ""
	I1117 15:56:53.866097   16550 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1117 15:56:53.866107   16550 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 15:56:53.866115   16550 start_flags.go:323] config:
	{Name:download-only-196672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-196672 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:56:53.866316   16550 iso.go:125] acquiring lock: {Name:mkc7f4527225ecf65fe1f10414ae202f7d6a2f67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:56:53.868423   16550 out.go:97] Downloading VM boot image ...
	I1117 15:56:53.868452   16550 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17634-9289/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1117 15:56:59.227499   16550 out.go:97] Starting control plane node download-only-196672 in cluster download-only-196672
	I1117 15:56:59.227518   16550 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1117 15:56:59.247555   16550 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1117 15:56:59.247594   16550 cache.go:56] Caching tarball of preloaded images
	I1117 15:56:59.247748   16550 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1117 15:56:59.249753   16550 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1117 15:56:59.249784   16550 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 15:56:59.282098   16550 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1117 15:57:02.395259   16550 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 15:57:02.395351   16550 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17634-9289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1117 15:57:03.306024   16550 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1117 15:57:03.306361   16550 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/download-only-196672/config.json ...
	I1117 15:57:03.306390   16550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/download-only-196672/config.json: {Name:mka985bb121d607ff666b2ca388dd0f730d5eca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:03.306547   16550 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1117 15:57:03.306745   16550 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17634-9289/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-196672"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
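The preload download in the log above attaches `?checksum=md5:...` to the tarball URL and then verifies the saved file (the `preload.go` "getting/saving/verifying checksum" lines). A minimal shell sketch of that verify step; the empty temp file and the md5 of zero bytes are placeholders, not the real preload tarball or its checksum:

```shell
# Verify a downloaded file against an expected md5, as the preload
# download does above. Placeholder values: an empty temp file and the
# md5 digest of zero-byte input.
expected="d41d8cd98f00b204e9800998ecf8427e"   # md5 of zero bytes
tmp=$(mktemp)
actual=$(md5sum "$tmp" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch: got $actual" >&2
fi
rm -f "$tmp"
```

On a mismatch the real download would be treated as corrupt; this sketch only reports it.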

TestDownloadOnly/v1.28.3/json-events (5.04s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196672 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196672 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.043077253s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.04s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196672
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196672: exit status 85 (75.589886ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:56 UTC |          |
	|         | -p download-only-196672        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-196672 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |          |
	|         | -p download-only-196672        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 15:57:03
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:57:03.927353   16609 out.go:296] Setting OutFile to fd 1 ...
	I1117 15:57:03.927636   16609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:03.927647   16609 out.go:309] Setting ErrFile to fd 2...
	I1117 15:57:03.927654   16609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:03.927868   16609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	W1117 15:57:03.928014   16609 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17634-9289/.minikube/config/config.json: open /home/jenkins/minikube-integration/17634-9289/.minikube/config/config.json: no such file or directory
	I1117 15:57:03.928451   16609 out.go:303] Setting JSON to true
	I1117 15:57:03.929275   16609 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2373,"bootTime":1700234251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 15:57:03.929341   16609 start.go:138] virtualization: kvm guest
	I1117 15:57:03.931631   16609 out.go:97] [download-only-196672] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 15:57:03.933453   16609 out.go:169] MINIKUBE_LOCATION=17634
	I1117 15:57:03.931825   16609 notify.go:220] Checking for updates...
	I1117 15:57:03.936521   16609 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 15:57:03.938151   16609 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 15:57:03.939596   16609 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 15:57:03.941147   16609 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-196672"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)
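The `==> Last Start <==` section above documents the klog line format `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. A small bash sketch that splits one such line into its severity letter and message; the sample line is copied verbatim from the log above:

```shell
# Split a klog line per the format noted above:
#   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
line='I1117 15:57:03.927353   16609 out.go:296] Setting OutFile to fd 1 ...'
level=${line:0:1}        # severity: I, W, E, or F
msg=${line#*] }          # everything after the first "] "
echo "$level $msg"
```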

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-196672
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-874673 --alsologtostderr --binary-mirror http://127.0.0.1:35083 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-874673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-874673
--- PASS: TestBinaryMirror (0.57s)

TestOffline (177.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-681119 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-681119 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m55.961545367s)
helpers_test.go:175: Cleaning up "offline-containerd-681119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-681119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-681119: (1.182132357s)
--- PASS: TestOffline (177.14s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-875867
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-875867: exit status 85 (62.858287ms)

-- stdout --
	* Profile "addons-875867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-875867"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-875867
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-875867: exit status 85 (63.18096ms)

-- stdout --
	* Profile "addons-875867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-875867"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (184.13s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-875867 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-875867 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m4.134332628s)
--- PASS: TestAddons/Setup (184.13s)

TestAddons/parallel/Registry (20s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 18.915617ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k9t8h" [44b91e39-4b1c-4108-bc16-48f82c7e024b] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020518564s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n4tj4" [35711d0c-dfcc-486e-8ae7-4a798f559329] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015918404s
addons_test.go:339: (dbg) Run:  kubectl --context addons-875867 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-875867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-875867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.008597084s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 ip
2023/11/17 16:00:33 [DEBUG] GET http://192.168.39.118:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.00s)

TestAddons/parallel/Ingress (20.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-875867 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-875867 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-875867 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [724c969b-f782-4c90-9da1-48f285404f34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [724c969b-f782-4c90-9da1-48f285404f34] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.018408643s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-875867 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.118
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-875867 addons disable ingress-dns --alsologtostderr -v=1: (1.622462252s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-875867 addons disable ingress --alsologtostderr -v=1: (7.899144312s)
--- PASS: TestAddons/parallel/Ingress (20.21s)

TestAddons/parallel/InspektorGadget (11.16s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ljh5k" [5f5643f2-9ef5-41e1-9d22-7f1b0174c58a] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015097129s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-875867
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-875867: (6.141326397s)
--- PASS: TestAddons/parallel/InspektorGadget (11.16s)

TestAddons/parallel/MetricsServer (6.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 18.702101ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-hfr5l" [d2b5b52e-fa8a-4480-82bb-d56913b91b9e] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017951888s
addons_test.go:414: (dbg) Run:  kubectl --context addons-875867 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.05s)

TestAddons/parallel/HelmTiller (15.6s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 5.092271ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-dptq7" [7022573f-54a3-478b-9a95-ce1dc16d2b50] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.026822626s
addons_test.go:472: (dbg) Run:  kubectl --context addons-875867 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-875867 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.870367341s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.60s)

TestAddons/parallel/CSI (51.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 18.880703ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-875867 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-875867 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [351dc9d0-446d-4fc9-b043-17d220adf88d] Pending
helpers_test.go:344: "task-pv-pod" [351dc9d0-446d-4fc9-b043-17d220adf88d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [351dc9d0-446d-4fc9-b043-17d220adf88d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.023805754s
addons_test.go:583: (dbg) Run:  kubectl --context addons-875867 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-875867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-875867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-875867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-875867 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-875867 delete pod task-pv-pod: (1.268338428s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-875867 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-875867 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-875867 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [972a7621-2742-45c1-8402-b723f378410c] Pending
helpers_test.go:344: "task-pv-pod-restore" [972a7621-2742-45c1-8402-b723f378410c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [972a7621-2742-45c1-8402-b723f378410c] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.033763764s
addons_test.go:625: (dbg) Run:  kubectl --context addons-875867 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-875867 delete pod task-pv-pod-restore: (1.320856174s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-875867 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-875867 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-875867 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.946339097s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.24s)
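The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` invocations above are the harness polling the claim until it reports the desired phase (or a timeout expires). A minimal stand-alone sketch of that poll loop; `wait_for_phase` is our illustrative helper name, not a minikube function, and against a real cluster you would pass it the actual `kubectl` command:

```shell
#!/bin/sh
# wait_for_phase CMD WANT TIMEOUT INTERVAL
# Re-runs CMD (any command that prints a phase string) every INTERVAL
# seconds until it prints WANT or TIMEOUT seconds elapse.
# Returns 0 once the phase matches, 1 on timeout.
wait_for_phase() {
  cmd=$1; want=$2; timeout=$3; interval=$4
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$($cmd 2>/dev/null)
    [ "$phase" = "$want" ] && return 0
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}

# Against the cluster in this report it would be invoked roughly as:
# wait_for_phase "kubectl --context addons-875867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default" Bound 360 2
```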

TestAddons/parallel/CloudSpanner (5.75s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-w7wr4" [c45ace11-9aee-4b12-b2cf-d911fa4a3cc5] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01643929s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-875867
--- PASS: TestAddons/parallel/CloudSpanner (5.75s)

TestAddons/parallel/LocalPath (11.51s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-875867 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-875867 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4dd6b1a0-c835-486c-9828-ad9cc89bfb19] Pending
helpers_test.go:344: "test-local-path" [4dd6b1a0-c835-486c-9828-ad9cc89bfb19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4dd6b1a0-c835-486c-9828-ad9cc89bfb19] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4dd6b1a0-c835-486c-9828-ad9cc89bfb19] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.023908898s
addons_test.go:890: (dbg) Run:  kubectl --context addons-875867 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 ssh "cat /opt/local-path-provisioner/pvc-bb6db982-17de-4131-8663-4e9ae86f5bf3_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-875867 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-875867 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-875867 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.51s)

TestAddons/parallel/NvidiaDevicePlugin (5.92s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5djb5" [050355c6-9874-4f44-8207-1f8439bcd3de] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.02148864s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-875867
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.92s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-875867 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-875867 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (92.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-875867
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-875867: (1m32.139668228s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-875867
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-875867
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-875867
--- PASS: TestAddons/StoppedEnableDisable (92.46s)

TestCertOptions (111.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-944287 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-944287 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m50.254925425s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-944287 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-944287 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-944287 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-944287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-944287
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-944287: (1.159719919s)
--- PASS: TestCertOptions (111.97s)
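The `openssl x509 -text -noout` step above is how the test confirms that every `--apiserver-ips` and `--apiserver-names` value ended up as a Subject Alternative Name in `apiserver.crt`. The same inspection can be tried locally against a throwaway self-signed certificate generated with the same SANs; the `/tmp` paths here are illustrative, and the real test reads `/var/lib/minikube/certs/apiserver.crt` over SSH:

```shell
# Generate a self-signed cert carrying the SANs the test requests
# (requires OpenSSL >= 1.1.1 for `req -addext`).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# The check itself: dump the cert as text and read off its SANs.
openssl x509 -text -noout -in /tmp/apiserver-demo.crt |
  grep -A1 "Subject Alternative Name"
```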

TestCertExpiration (279.14s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-795843 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-795843 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m12.013509194s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-795843 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-795843 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (25.934801485s)
helpers_test.go:175: Cleaning up "cert-expiration-795843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-795843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-795843: (1.194319286s)
--- PASS: TestCertExpiration (279.14s)

TestForceSystemdFlag (78.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682119 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682119 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m16.488782944s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682119 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-682119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-682119: (1.202294972s)
--- PASS: TestForceSystemdFlag (78.28s)
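The `cat /etc/containerd/config.toml` step feeds an assertion that `--force-systemd` actually switched containerd's runc cgroup driver to systemd. A local sketch of that check, run against a sample config fragment rather than the VM's real file (the fragment below is illustrative of containerd's CRI runc options section; the exact contents on the VM may differ):

```shell
# Sample of the containerd config section the test inspects on the VM.
cat > /tmp/containerd-config-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Assert the systemd cgroup driver is enabled, as the test does remotely.
grep -q 'SystemdCgroup = true' /tmp/containerd-config-demo.toml \
  && echo "systemd cgroup driver enabled"
```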

TestForceSystemdEnv (99.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-721229 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-721229 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m38.373859105s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-721229 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-721229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-721229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-721229: (1.154754637s)
--- PASS: TestForceSystemdEnv (99.92s)

TestKVMDriverInstallOrUpdate (2.42s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.42s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.58s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 stop: (1.405656881s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198649 --log_dir /tmp/nospam-198649 stop
--- PASS: TestErrorSpam/stop (1.58s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17634-9289/.minikube/files/etc/test/nested/copy/16538/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (119.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1117 16:05:14.191919   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.197741   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.208056   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.228386   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.268729   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.349150   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.509586   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:14.830251   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:15.471351   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:16.751865   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:19.312787   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:24.433925   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:34.674142   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:05:55.154933   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-857928 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m59.608467475s)
--- PASS: TestFunctional/serial/StartWithProxy (119.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-857928 --alsologtostderr -v=8: (5.995206126s)
functional_test.go:659: soft start took 5.995929338s for "functional-857928" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-857928 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:3.1: (1.251361264s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:3.3: (1.344839267s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 cache add registry.k8s.io/pause:latest: (1.307873518s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.90s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-857928 /tmp/TestFunctionalserialCacheCmdcacheadd_local2915536080/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache add minikube-local-cache-test:functional-857928
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 cache add minikube-local-cache-test:functional-857928: (1.030833668s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache delete minikube-local-cache-test:functional-857928
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-857928
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (237.193302ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 cache reload: (1.502677761s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 kubectl -- --context functional-857928 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-857928 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (41.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1117 16:06:36.115424   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-857928 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.40418427s)
functional_test.go:757: restart took 41.404307174s for "functional-857928" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.40s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-857928 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 logs: (1.603207502s)
--- PASS: TestFunctional/serial/LogsCmd (1.60s)
TestFunctional/serial/LogsFileCmd (1.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 logs --file /tmp/TestFunctionalserialLogsFileCmd1541402279/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 logs --file /tmp/TestFunctionalserialLogsFileCmd1541402279/001/logs.txt: (1.638915527s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)
TestFunctional/serial/InvalidService (3.8s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-857928 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-857928
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-857928: exit status 115 (318.482657ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.82:31497 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-857928 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.80s)
TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 config get cpus: exit status 14 (72.376265ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 config get cpus: exit status 14 (63.420377ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
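The sequence above exercises a simple contract: `config get` on a key that is not set fails with exit status 14 and `Error: specified key could not be found in config`, while `set` and `unset` succeed. A toy model of that contract (the class and method names are illustrative, not minikube's implementation):

```python
class ConfigStore:
    """Toy model of the set/unset/get contract the test exercises."""
    ERR_MISSING_KEY = 14  # exit status minikube returns for a missing key

    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value
        return 0

    def unset(self, key):
        # Unsetting a key that is absent still succeeds, as the log shows.
        self._values.pop(key, None)
        return 0

    def get(self, key):
        if key not in self._values:
            # "Error: specified key could not be found in config"
            return self.ERR_MISSING_KEY
        return 0

store = ConfigStore()
store.unset("cpus")
assert store.get("cpus") == 14  # unset key -> exit status 14
store.set("cpus", "2")
assert store.get("cpus") == 0
store.unset("cpus")
assert store.get("cpus") == 14
```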
TestFunctional/parallel/DashboardCmd (13.17s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-857928 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-857928 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23788: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.17s)
TestFunctional/parallel/DryRun (0.31s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-857928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (154.65895ms)
-- stdout --
	* [functional-857928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1117 16:07:41.073880   23697 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:07:41.074157   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:41.074168   23697 out.go:309] Setting ErrFile to fd 2...
	I1117 16:07:41.074173   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:41.074387   23697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:07:41.075105   23697 out.go:303] Setting JSON to false
	I1117 16:07:41.076254   23697 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3010,"bootTime":1700234251,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:07:41.076342   23697 start.go:138] virtualization: kvm guest
	I1117 16:07:41.078387   23697 out.go:177] * [functional-857928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 16:07:41.080474   23697 notify.go:220] Checking for updates...
	I1117 16:07:41.080485   23697 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:07:41.082492   23697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:07:41.084318   23697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 16:07:41.085923   23697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 16:07:41.087381   23697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:07:41.088893   23697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:07:41.090996   23697 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:07:41.091426   23697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:07:41.091466   23697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:41.105958   23697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
	I1117 16:07:41.106380   23697 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:41.106961   23697 main.go:141] libmachine: Using API Version  1
	I1117 16:07:41.106986   23697 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:41.107350   23697 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:41.107524   23697 main.go:141] libmachine: (functional-857928) Calling .DriverName
	I1117 16:07:41.107787   23697 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:07:41.108172   23697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:07:41.108210   23697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:41.122304   23697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I1117 16:07:41.122710   23697 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:41.123253   23697 main.go:141] libmachine: Using API Version  1
	I1117 16:07:41.123272   23697 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:41.123628   23697 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:41.123809   23697 main.go:141] libmachine: (functional-857928) Calling .DriverName
	I1117 16:07:41.158630   23697 out.go:177] * Using the kvm2 driver based on existing profile
	I1117 16:07:41.160363   23697 start.go:298] selected driver: kvm2
	I1117 16:07:41.160377   23697 start.go:902] validating driver "kvm2" against &{Name:functional-857928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-857928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.82 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:07:41.160525   23697 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:07:41.163057   23697 out.go:177] 
	W1117 16:07:41.164766   23697 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 16:07:41.166254   23697 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
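The dry run fails with `RSRC_INSUFFICIENT_REQ_MEMORY` because the requested 250 MiB is below the 1800 MB floor reported in the log, and the command exits with status 23. A hedged sketch of that validation step (the constants come from the log; the function itself is hypothetical, not minikube's code):

```python
MIN_USABLE_MB = 1800  # usable minimum reported in the log above
EXIT_INSUFFICIENT = 23  # exit status observed for the failed dry run

def validate_memory(requested_mb):
    """Model the RSRC_INSUFFICIENT_REQ_MEMORY check from the log."""
    if requested_mb < MIN_USABLE_MB:
        return (EXIT_INSUFFICIENT,
                f"Requested memory allocation {requested_mb}MiB is less than "
                f"the usable minimum of {MIN_USABLE_MB}MB")
    return (0, "")

status, msg = validate_memory(250)
assert status == 23
```

The second `start --dry-run` invocation omits `--memory`, so it reuses the existing profile's 4000 MB allocation and passes validation.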
TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-857928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-857928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (154.052255ms)
-- stdout --
	* [functional-857928] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1117 16:07:39.920141   23569 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:07:39.920275   23569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:39.920285   23569 out.go:309] Setting ErrFile to fd 2...
	I1117 16:07:39.920290   23569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:39.920601   23569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:07:39.921167   23569 out.go:303] Setting JSON to false
	I1117 16:07:39.922109   23569 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3009,"bootTime":1700234251,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:07:39.922170   23569 start.go:138] virtualization: kvm guest
	I1117 16:07:39.924746   23569 out.go:177] * [functional-857928] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1117 16:07:39.926754   23569 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:07:39.926835   23569 notify.go:220] Checking for updates...
	I1117 16:07:39.928514   23569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:07:39.930192   23569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 16:07:39.931686   23569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 16:07:39.933252   23569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:07:39.934821   23569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:07:39.936924   23569 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:07:39.937520   23569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:07:39.937612   23569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:39.951984   23569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I1117 16:07:39.952364   23569 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:39.952936   23569 main.go:141] libmachine: Using API Version  1
	I1117 16:07:39.952972   23569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:39.953358   23569 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:39.953572   23569 main.go:141] libmachine: (functional-857928) Calling .DriverName
	I1117 16:07:39.953798   23569 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:07:39.954074   23569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:07:39.954124   23569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:39.968451   23569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I1117 16:07:39.968942   23569 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:39.969410   23569 main.go:141] libmachine: Using API Version  1
	I1117 16:07:39.969438   23569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:39.969775   23569 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:39.969940   23569 main.go:141] libmachine: (functional-857928) Calling .DriverName
	I1117 16:07:40.004052   23569 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1117 16:07:40.005553   23569 start.go:298] selected driver: kvm2
	I1117 16:07:40.005563   23569 start.go:902] validating driver "kvm2" against &{Name:functional-857928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-857928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.82 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:07:40.005660   23569 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:07:40.007989   23569 out.go:177] 
	W1117 16:07:40.009439   23569 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 16:07:40.010914   23569 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
TestFunctional/parallel/StatusCmd (1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
TestFunctional/parallel/ServiceCmdConnect (12.66s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-857928 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-857928 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ztzmn" [0331c09b-53a4-4e27-83ab-3b8190856ffd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ztzmn" [0331c09b-53a4-4e27-83ab-3b8190856ffd] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.024178204s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.82:30105
functional_test.go:1674: http://192.168.39.82:30105: success! body:

Hostname: hello-node-connect-55497b8b78-ztzmn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.82:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.82:30105
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.66s)
TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)
TestFunctional/parallel/PersistentVolumeClaim (40.38s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [439a4dd9-0915-4b3c-80a9-daeb66106b24] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.027554144s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-857928 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-857928 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-857928 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-857928 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-857928 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38c21cf9-fdd7-4656-adf9-e3947210e424] Pending
helpers_test.go:344: "sp-pod" [38c21cf9-fdd7-4656-adf9-e3947210e424] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [38c21cf9-fdd7-4656-adf9-e3947210e424] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.043686916s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-857928 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-857928 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-857928 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c834cd4-6275-40d3-9cc5-3283ece30fc8] Pending
helpers_test.go:344: "sp-pod" [6c834cd4-6275-40d3-9cc5-3283ece30fc8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c834cd4-6275-40d3-9cc5-3283ece30fc8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.018639411s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-857928 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.38s)
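The key assertion in the PVC test is the last pair of steps: a file written by the first `sp-pod` (`touch /tmp/mount/foo`) is still visible to a replacement pod after the original is deleted, because both pods mount the same claim and the claim's storage outlives any single pod. A toy model of that lifecycle (all names here are illustrative, not Kubernetes API objects):

```python
class PersistentVolume:
    """Stands in for the claim's backing storage, which outlives pods."""
    def __init__(self):
        self.files = set()

class Pod:
    """A pod mounts the volume; deleting the pod does not touch the volume."""
    def __init__(self, volume):
        self.volume = volume

    def touch(self, path):
        self.volume.files.add(path)

    def ls(self):
        return sorted(self.volume.files)

claim = PersistentVolume()
first = Pod(claim)
first.touch("/tmp/mount/foo")
del first                      # pod deleted; the claim's data survives
second = Pod(claim)            # replacement pod mounts the same claim
assert second.ls() == ["/tmp/mount/foo"]
```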
TestFunctional/parallel/SSHCmd (0.46s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (0.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh -n functional-857928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 cp functional-857928:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1633541917/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh -n functional-857928 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)

TestFunctional/parallel/MySQL (30.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-857928 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cgp5x" [54b820fb-f726-4c36-8047-62d70928fc1f] Pending
helpers_test.go:344: "mysql-859648c796-cgp5x" [54b820fb-f726-4c36-8047-62d70928fc1f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cgp5x" [54b820fb-f726-4c36-8047-62d70928fc1f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.037511424s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;": exit status 1 (196.33746ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;": exit status 1 (364.05452ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;": exit status 1 (301.984925ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;": exit status 1 (331.610541ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-857928 exec mysql-859648c796-cgp5x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.75s)
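The transient `ERROR 2002` / `ERROR 1045` responses above are the normal startup sequence of the MySQL container, and the test only passes because the harness re-runs the query until it succeeds within the 10m0s budget. That retry shape can be sketched as follows (`retry` is a hypothetical helper, not part of the minikube harness):

```shell
# Hypothetical helper: re-run a command until it succeeds or the
# attempt budget is exhausted, sleeping between attempts.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0          # command succeeded
    fi
    i=$((i + 1))
    sleep 1             # give the server time to finish starting
  done
  return 1              # still failing after all attempts
}

# Illustrative usage (cluster command, only works inside the test VM):
# retry 10 kubectl --context functional-857928 exec mysql-859648c796-cgp5x \
#   -- mysql -ppassword -e "show databases;"
```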

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16538/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /etc/test/nested/copy/16538/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16538.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /etc/ssl/certs/16538.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16538.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /usr/share/ca-certificates/16538.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/165382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /etc/ssl/certs/165382.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/165382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /usr/share/ca-certificates/165382.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-857928 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "sudo systemctl is-active docker": exit status 1 (267.863646ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "sudo systemctl is-active crio": exit status 1 (243.737123ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
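`systemctl is-active` reports the unit state on stdout and exits non-zero for any state other than `active` (status 3 for `inactive`), which is why the expected outcome above is `inactive` on stdout together with `ssh: Process exited with status 3`. Capturing the two pieces separately can be sketched as (generic shell; `probe` is a hypothetical stand-in, no systemd or minikube assumed):

```shell
# Stand-in for `sudo systemctl is-active docker`, so the sketch runs without
# systemd: print the unit state and exit 3, as systemctl does for "inactive".
probe() { echo inactive; return 3; }

rc=0
state=$(probe) || rc=$?   # keep stdout and the exit status separately

if [ "$rc" -ne 0 ] && [ "$state" = "inactive" ]; then
  echo "runtime disabled, as the test expects"
fi
```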

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.85s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857928 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-857928
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-857928
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857928 image ls --format short --alsologtostderr:
I1117 16:07:46.783688   24017 out.go:296] Setting OutFile to fd 1 ...
I1117 16:07:46.783894   24017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:46.783907   24017 out.go:309] Setting ErrFile to fd 2...
I1117 16:07:46.783917   24017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:46.784215   24017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
I1117 16:07:46.785030   24017 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:46.785182   24017 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:46.785803   24017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:46.785862   24017 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:46.800303   24017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
I1117 16:07:46.800830   24017 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:46.801377   24017 main.go:141] libmachine: Using API Version  1
I1117 16:07:46.801405   24017 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:46.801803   24017 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:46.801984   24017 main.go:141] libmachine: (functional-857928) Calling .GetState
I1117 16:07:46.803962   24017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:46.804013   24017 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:46.819972   24017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
I1117 16:07:46.820380   24017 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:46.820936   24017 main.go:141] libmachine: Using API Version  1
I1117 16:07:46.820961   24017 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:46.821249   24017 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:46.821423   24017 main.go:141] libmachine: (functional-857928) Calling .DriverName
I1117 16:07:46.821657   24017 ssh_runner.go:195] Run: systemctl --version
I1117 16:07:46.821687   24017 main.go:141] libmachine: (functional-857928) Calling .GetSSHHostname
I1117 16:07:46.824946   24017 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:46.825324   24017 main.go:141] libmachine: (functional-857928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:d9:b0", ip: ""} in network mk-functional-857928: {Iface:virbr1 ExpiryTime:2023-11-17 17:04:21 +0000 UTC Type:0 Mac:52:54:00:ea:d9:b0 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-857928 Clientid:01:52:54:00:ea:d9:b0}
I1117 16:07:46.825370   24017 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined IP address 192.168.39.82 and MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:46.825491   24017 main.go:141] libmachine: (functional-857928) Calling .GetSSHPort
I1117 16:07:46.825672   24017 main.go:141] libmachine: (functional-857928) Calling .GetSSHKeyPath
I1117 16:07:46.825844   24017 main.go:141] libmachine: (functional-857928) Calling .GetSSHUsername
I1117 16:07:46.825983   24017 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/functional-857928/id_rsa Username:docker}
I1117 16:07:46.914232   24017 ssh_runner.go:195] Run: sudo crictl images --output json
I1117 16:07:46.979907   24017 main.go:141] libmachine: Making call to close driver server
I1117 16:07:46.979919   24017 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:46.980162   24017 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:46.980187   24017 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:46.980195   24017 main.go:141] libmachine: Making call to close driver server
I1117 16:07:46.980200   24017 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
I1117 16:07:46.980203   24017 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:46.980436   24017 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:46.980454   24017 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:46.980477   24017 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857928 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:bfc896 | 24.6MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/nginx                     | latest             | sha256:c20060 | 70.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537434 | 34.7MB |
| docker.io/library/minikube-local-cache-test | functional-857928  | sha256:a78ff5 | 1.01kB |
| docker.io/library/mysql                     | 5.7                | sha256:bdba75 | 138MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/google-containers/addon-resizer      | functional-857928  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:10baa1 | 33.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:6d1b4f | 18.8MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857928 image ls --format table --alsologtostderr:
I1117 16:07:48.419624   24244 out.go:296] Setting OutFile to fd 1 ...
I1117 16:07:48.419770   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:48.419780   24244 out.go:309] Setting ErrFile to fd 2...
I1117 16:07:48.419787   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:48.419978   24244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
I1117 16:07:48.420749   24244 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:48.420918   24244 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:48.421422   24244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:48.421464   24244 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:48.435752   24244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
I1117 16:07:48.436182   24244 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:48.436804   24244 main.go:141] libmachine: Using API Version  1
I1117 16:07:48.436838   24244 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:48.437227   24244 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:48.437448   24244 main.go:141] libmachine: (functional-857928) Calling .GetState
I1117 16:07:48.439329   24244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:48.439368   24244 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:48.453645   24244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
I1117 16:07:48.454119   24244 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:48.454620   24244 main.go:141] libmachine: Using API Version  1
I1117 16:07:48.454662   24244 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:48.454984   24244 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:48.455187   24244 main.go:141] libmachine: (functional-857928) Calling .DriverName
I1117 16:07:48.455397   24244 ssh_runner.go:195] Run: systemctl --version
I1117 16:07:48.455418   24244 main.go:141] libmachine: (functional-857928) Calling .GetSSHHostname
I1117 16:07:48.458123   24244 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:48.458522   24244 main.go:141] libmachine: (functional-857928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:d9:b0", ip: ""} in network mk-functional-857928: {Iface:virbr1 ExpiryTime:2023-11-17 17:04:21 +0000 UTC Type:0 Mac:52:54:00:ea:d9:b0 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-857928 Clientid:01:52:54:00:ea:d9:b0}
I1117 16:07:48.458556   24244 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined IP address 192.168.39.82 and MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:48.458721   24244 main.go:141] libmachine: (functional-857928) Calling .GetSSHPort
I1117 16:07:48.458946   24244 main.go:141] libmachine: (functional-857928) Calling .GetSSHKeyPath
I1117 16:07:48.459080   24244 main.go:141] libmachine: (functional-857928) Calling .GetSSHUsername
I1117 16:07:48.459224   24244 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/functional-857928/id_rsa Username:docker}
I1117 16:07:48.586053   24244 ssh_runner.go:195] Run: sudo crictl images --output json
I1117 16:07:48.680182   24244 main.go:141] libmachine: Making call to close driver server
I1117 16:07:48.680200   24244 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:48.680525   24244 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
I1117 16:07:48.680575   24244 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:48.680593   24244 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:48.680612   24244 main.go:141] libmachine: Making call to close driver server
I1117 16:07:48.680621   24244 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:48.680841   24244 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:48.680858   24244 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:48.680936   24244 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857928 image ls --format json --alsologtostderr:
[{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"24561096"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"33404036"},{"id":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"18815674"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:a78ff53cc0cd0955fcc7262eba148c0071ff3b4d0e3182b17869d00c3761010c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-857928"],"size":"1006"},{"id":"sha256:bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909408"},{"id":"sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"70544532"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-857928"],"size":"10823156"},{"id":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"34666616"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857928 image ls --format json --alsologtostderr:
I1117 16:07:48.054487   24185 out.go:296] Setting OutFile to fd 1 ...
I1117 16:07:48.054675   24185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:48.054686   24185 out.go:309] Setting ErrFile to fd 2...
I1117 16:07:48.054694   24185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:48.055033   24185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
I1117 16:07:48.055941   24185 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:48.056102   24185 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:48.056724   24185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:48.056787   24185 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:48.072239   24185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
I1117 16:07:48.072691   24185 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:48.073345   24185 main.go:141] libmachine: Using API Version  1
I1117 16:07:48.073373   24185 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:48.073735   24185 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:48.073930   24185 main.go:141] libmachine: (functional-857928) Calling .GetState
I1117 16:07:48.076138   24185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:48.076192   24185 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:48.091510   24185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
I1117 16:07:48.092010   24185 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:48.092455   24185 main.go:141] libmachine: Using API Version  1
I1117 16:07:48.092476   24185 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:48.092861   24185 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:48.093022   24185 main.go:141] libmachine: (functional-857928) Calling .DriverName
I1117 16:07:48.093216   24185 ssh_runner.go:195] Run: systemctl --version
I1117 16:07:48.093253   24185 main.go:141] libmachine: (functional-857928) Calling .GetSSHHostname
I1117 16:07:48.096167   24185 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:48.096563   24185 main.go:141] libmachine: (functional-857928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:d9:b0", ip: ""} in network mk-functional-857928: {Iface:virbr1 ExpiryTime:2023-11-17 17:04:21 +0000 UTC Type:0 Mac:52:54:00:ea:d9:b0 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-857928 Clientid:01:52:54:00:ea:d9:b0}
I1117 16:07:48.096595   24185 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined IP address 192.168.39.82 and MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:48.096729   24185 main.go:141] libmachine: (functional-857928) Calling .GetSSHPort
I1117 16:07:48.096924   24185 main.go:141] libmachine: (functional-857928) Calling .GetSSHKeyPath
I1117 16:07:48.097080   24185 main.go:141] libmachine: (functional-857928) Calling .GetSSHUsername
I1117 16:07:48.097223   24185 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/functional-857928/id_rsa Username:docker}
I1117 16:07:48.229980   24185 ssh_runner.go:195] Run: sudo crictl images --output json
I1117 16:07:48.352941   24185 main.go:141] libmachine: Making call to close driver server
I1117 16:07:48.352964   24185 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:48.353265   24185 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:48.353287   24185 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:48.353297   24185 main.go:141] libmachine: Making call to close driver server
I1117 16:07:48.353301   24185 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
I1117 16:07:48.353307   24185 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:48.353541   24185 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
I1117 16:07:48.353617   24185 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:48.353653   24185 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
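The image inventory in the test stdout above comes from `sudo crictl images --output json` run over SSH. As a rough illustration (not minikube's own code), the records in that JSON can be reduced to a tag/size table with a few lines of Python; the field names (`id`, `repoTags`, `size`) are taken from the captured output, and a top-level `images` wrapper is handled for crictl builds that emit one.

```python
import json

# Minimal sample in the shape seen in the captured stdout above
# (the digest is truncated here purely for illustration).
sample = """
[{"id": "sha256:ead0a4...", "repoDigests": [],
  "repoTags": ["registry.k8s.io/coredns/coredns:v1.10.1"],
  "size": "16190758"}]
"""

def image_table(raw: str):
    """Return (first tag, size in bytes) for each image record."""
    data = json.loads(raw)
    # Some crictl versions wrap the list in {"images": [...]}.
    records = data.get("images", []) if isinstance(data, dict) else data
    rows = []
    for img in records:
        tags = img.get("repoTags") or ["<none>"]
        rows.append((tags[0], int(img["size"])))
    return rows

for tag, size in image_table(sample):
    print(f"{tag}\t{size / 1e6:.1f} MB")
```

This is only a sketch of consuming the output format shown in the log; the real test compares the parsed list against expected image names.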
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-857928 image ls --format yaml --alsologtostderr:
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "34666616"
- id: sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "33404036"
- id: sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "24561096"
- id: sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "18815674"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "137909408"
- id: sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
repoTags:
- docker.io/library/nginx:latest
size: "70544532"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-857928
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a78ff53cc0cd0955fcc7262eba148c0071ff3b4d0e3182b17869d00c3761010c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-857928
size: "1006"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857928 image ls --format yaml --alsologtostderr:
I1117 16:07:47.041637   24040 out.go:296] Setting OutFile to fd 1 ...
I1117 16:07:47.041939   24040 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:47.041949   24040 out.go:309] Setting ErrFile to fd 2...
I1117 16:07:47.041954   24040 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:47.042209   24040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
I1117 16:07:47.042851   24040 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:47.042974   24040 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:47.043411   24040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:47.043479   24040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:47.058428   24040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
I1117 16:07:47.058908   24040 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:47.059507   24040 main.go:141] libmachine: Using API Version  1
I1117 16:07:47.059546   24040 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:47.059936   24040 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:47.060125   24040 main.go:141] libmachine: (functional-857928) Calling .GetState
I1117 16:07:47.062015   24040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:47.062063   24040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:47.078002   24040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
I1117 16:07:47.078419   24040 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:47.078991   24040 main.go:141] libmachine: Using API Version  1
I1117 16:07:47.079023   24040 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:47.079319   24040 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:47.079549   24040 main.go:141] libmachine: (functional-857928) Calling .DriverName
I1117 16:07:47.079766   24040 ssh_runner.go:195] Run: systemctl --version
I1117 16:07:47.079805   24040 main.go:141] libmachine: (functional-857928) Calling .GetSSHHostname
I1117 16:07:47.082583   24040 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:47.082963   24040 main.go:141] libmachine: (functional-857928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:d9:b0", ip: ""} in network mk-functional-857928: {Iface:virbr1 ExpiryTime:2023-11-17 17:04:21 +0000 UTC Type:0 Mac:52:54:00:ea:d9:b0 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-857928 Clientid:01:52:54:00:ea:d9:b0}
I1117 16:07:47.082999   24040 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined IP address 192.168.39.82 and MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:47.083253   24040 main.go:141] libmachine: (functional-857928) Calling .GetSSHPort
I1117 16:07:47.083456   24040 main.go:141] libmachine: (functional-857928) Calling .GetSSHKeyPath
I1117 16:07:47.083638   24040 main.go:141] libmachine: (functional-857928) Calling .GetSSHUsername
I1117 16:07:47.083780   24040 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/functional-857928/id_rsa Username:docker}
I1117 16:07:47.169351   24040 ssh_runner.go:195] Run: sudo crictl images --output json
I1117 16:07:47.233345   24040 main.go:141] libmachine: Making call to close driver server
I1117 16:07:47.233367   24040 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:47.233650   24040 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:47.233668   24040 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:47.233677   24040 main.go:141] libmachine: Making call to close driver server
I1117 16:07:47.233691   24040 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:47.233943   24040 main.go:141] libmachine: (functional-857928) DBG | Closing plugin on server side
I1117 16:07:47.233959   24040 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:47.233972   24040 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh pgrep buildkitd: exit status 1 (209.379653ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image build -t localhost/my-image:functional-857928 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image build -t localhost/my-image:functional-857928 testdata/build --alsologtostderr: (4.006748456s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-857928 image build -t localhost/my-image:functional-857928 testdata/build --alsologtostderr:
I1117 16:07:47.520244   24094 out.go:296] Setting OutFile to fd 1 ...
I1117 16:07:47.520507   24094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:47.520522   24094 out.go:309] Setting ErrFile to fd 2...
I1117 16:07:47.520530   24094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:07:47.520888   24094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
I1117 16:07:47.521888   24094 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:47.522540   24094 config.go:182] Loaded profile config "functional-857928": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1117 16:07:47.523044   24094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:47.523090   24094 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:47.539963   24094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
I1117 16:07:47.540500   24094 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:47.541102   24094 main.go:141] libmachine: Using API Version  1
I1117 16:07:47.541147   24094 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:47.541520   24094 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:47.541747   24094 main.go:141] libmachine: (functional-857928) Calling .GetState
I1117 16:07:47.543980   24094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1117 16:07:47.544032   24094 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:07:47.561280   24094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
I1117 16:07:47.561745   24094 main.go:141] libmachine: () Calling .GetVersion
I1117 16:07:47.562351   24094 main.go:141] libmachine: Using API Version  1
I1117 16:07:47.562385   24094 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:07:47.562722   24094 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:07:47.562913   24094 main.go:141] libmachine: (functional-857928) Calling .DriverName
I1117 16:07:47.563098   24094 ssh_runner.go:195] Run: systemctl --version
I1117 16:07:47.563123   24094 main.go:141] libmachine: (functional-857928) Calling .GetSSHHostname
I1117 16:07:47.566475   24094 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:47.566901   24094 main.go:141] libmachine: (functional-857928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:d9:b0", ip: ""} in network mk-functional-857928: {Iface:virbr1 ExpiryTime:2023-11-17 17:04:21 +0000 UTC Type:0 Mac:52:54:00:ea:d9:b0 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:functional-857928 Clientid:01:52:54:00:ea:d9:b0}
I1117 16:07:47.566997   24094 main.go:141] libmachine: (functional-857928) DBG | domain functional-857928 has defined IP address 192.168.39.82 and MAC address 52:54:00:ea:d9:b0 in network mk-functional-857928
I1117 16:07:47.567150   24094 main.go:141] libmachine: (functional-857928) Calling .GetSSHPort
I1117 16:07:47.567317   24094 main.go:141] libmachine: (functional-857928) Calling .GetSSHKeyPath
I1117 16:07:47.567473   24094 main.go:141] libmachine: (functional-857928) Calling .GetSSHUsername
I1117 16:07:47.567606   24094 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/functional-857928/id_rsa Username:docker}
I1117 16:07:47.674895   24094 build_images.go:151] Building image from path: /tmp/build.3636509584.tar
I1117 16:07:47.674970   24094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1117 16:07:47.693238   24094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3636509584.tar
I1117 16:07:47.705656   24094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3636509584.tar: stat -c "%s %y" /var/lib/minikube/build/build.3636509584.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3636509584.tar': No such file or directory
I1117 16:07:47.705699   24094 ssh_runner.go:362] scp /tmp/build.3636509584.tar --> /var/lib/minikube/build/build.3636509584.tar (3072 bytes)
I1117 16:07:47.770369   24094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3636509584
I1117 16:07:47.837340   24094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3636509584 -xf /var/lib/minikube/build/build.3636509584.tar
I1117 16:07:47.859866   24094 containerd.go:378] Building image: /var/lib/minikube/build/build.3636509584
I1117 16:07:47.859945   24094 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3636509584 --local dockerfile=/var/lib/minikube/build/build.3636509584 --output type=image,name=localhost/my-image:functional-857928
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 1.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d870a72cf2a81e1d5980d3d92bee149f324522a65dadd36d23e3388b1a0d8e34 0.0s done
#8 exporting config sha256:3aeed87f2c62a2d53ee9242aeb7f01b03992e4ef0a97e172622d25ec310bb267 0.0s done
#8 naming to localhost/my-image:functional-857928
#8 naming to localhost/my-image:functional-857928 done
#8 DONE 0.2s
I1117 16:07:51.402359   24094 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3636509584 --local dockerfile=/var/lib/minikube/build/build.3636509584 --output type=image,name=localhost/my-image:functional-857928: (3.542372887s)
I1117 16:07:51.402432   24094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3636509584
I1117 16:07:51.416876   24094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3636509584.tar
I1117 16:07:51.449870   24094 build_images.go:207] Built localhost/my-image:functional-857928 from /tmp/build.3636509584.tar
I1117 16:07:51.449903   24094 build_images.go:123] succeeded building to: functional-857928
I1117 16:07:51.449906   24094 build_images.go:124] failed building to: 
I1117 16:07:51.449924   24094 main.go:141] libmachine: Making call to close driver server
I1117 16:07:51.449932   24094 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:51.450191   24094 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:51.450205   24094 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:07:51.450214   24094 main.go:141] libmachine: Making call to close driver server
I1117 16:07:51.450222   24094 main.go:141] libmachine: (functional-857928) Calling .Close
I1117 16:07:51.450461   24094 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:07:51.450480   24094 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
2023/11/17 16:07:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.46s)
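For reference, the build path exercised above (stage the context tarball under /var/lib/minikube/build, unpack it, invoke BuildKit's dockerfile.v0 frontend, then clean up) can be summarized as the following command sequence. This is a sketch reconstructed from the log lines, not minikube's actual implementation; the build ID and image tag are placeholders mirroring the values seen above.

```python
def build_commands(build_id: str, tag: str) -> list[str]:
    """Reconstruct the node-side command sequence from the ImageBuild log."""
    tar = f"/var/lib/minikube/build/build.{build_id}.tar"
    ctx = f"/var/lib/minikube/build/build.{build_id}"
    return [
        # unpack the staged build context next to the tarball
        f"sudo mkdir -p {ctx}",
        f"sudo tar -C {ctx} -xf {tar}",
        # drive BuildKit directly with the Dockerfile frontend
        ("sudo buildctl build --frontend dockerfile.v0 "
         f"--local context={ctx} --local dockerfile={ctx} "
         f"--output type=image,name={tag}"),
        # clean up the staged context and tarball
        f"sudo rm -rf {ctx}",
        f"sudo rm -f {tar}",
    ]

for cmd in build_commands("3636509584", "localhost/my-image:functional-857928"):
    print(cmd)
```

The sketch omits the SSH transport and the scp of the tarball onto the node, which the log shows happening before the unpack step.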
TestFunctional/parallel/ImageCommands/Setup (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.055887395s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-857928
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.08s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr: (5.719346749s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.96s)
TestFunctional/parallel/MountCmd/any-port (18.22s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdany-port1332351920/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1700237230795257197" to /tmp/TestFunctionalparallelMountCmdany-port1332351920/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1700237230795257197" to /tmp/TestFunctionalparallelMountCmdany-port1332351920/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1700237230795257197" to /tmp/TestFunctionalparallelMountCmdany-port1332351920/001/test-1700237230795257197
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.152309ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 test-1700237230795257197
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh cat /mount-9p/test-1700237230795257197
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-857928 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [03845717-f639-4dad-962c-c8380c98094f] Pending
helpers_test.go:344: "busybox-mount" [03845717-f639-4dad-962c-c8380c98094f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [03845717-f639-4dad-962c-c8380c98094f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [03845717-f639-4dad-962c-c8380c98094f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.03054093s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-857928 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdany-port1332351920/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr: (4.68710919s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-857928
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image load --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr: (6.447396508s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image save gcr.io/google-containers/addon-resizer:functional-857928 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image save gcr.io/google-containers/addon-resizer:functional-857928 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.639740015s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image rm gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdspecific-port3349840097/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.648728ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdspecific-port3349840097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "sudo umount -f /mount-9p": exit status 1 (218.19709ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-857928 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdspecific-port3349840097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.998804101s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T" /mount1: exit status 1 (365.042555ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-857928 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-857928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup941096616/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-857928
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 image save --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 image save --daemon gcr.io/google-containers/addon-resizer:functional-857928 --alsologtostderr: (1.751161043s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-857928
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.79s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-857928 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-857928 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6rpn7" [8dd53eb2-e397-45cd-9e72-940dee8957b1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6rpn7" [8dd53eb2-e397-45cd-9e72-940dee8957b1] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.018817117s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "216.072483ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "70.646588ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "249.597662ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.330798ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/ServiceCmd/List (1.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 service list: (1.32150576s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-857928 service list -o json: (1.296930388s)
functional_test.go:1493: Took "1.297027139s" to run "out/minikube-linux-amd64 -p functional-857928 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.82:30086
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-857928 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.82:30086
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-857928
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-857928
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-857928
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (110.79s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-670547 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1117 16:07:58.036392   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-670547 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m50.79070523s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.79s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons enable ingress --alsologtostderr -v=5: (10.607781919s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.61s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.13s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-670547 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1117 16:10:14.191044   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-670547 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.674083441s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-670547 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-670547 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7ef5c343-04e9-439e-a2ad-f3182895a1f2] Pending
helpers_test.go:344: "nginx" [7ef5c343-04e9-439e-a2ad-f3182895a1f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7ef5c343-04e9-439e-a2ad-f3182895a1f2] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.028960869s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-670547 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.117
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons disable ingress-dns --alsologtostderr -v=1: (2.618993561s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-670547 addons disable ingress --alsologtostderr -v=1: (7.617276451s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.13s)

TestJSONOutput/start/Command (82.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-796986 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1117 16:10:41.878919   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-796986 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.120168004s)
--- PASS: TestJSONOutput/start/Command (82.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-796986 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-796986 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (2.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-796986 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-796986 --output=json --user=testUser: (2.098974574s)
--- PASS: TestJSONOutput/stop/Command (2.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-102790 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-102790 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.09368ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d325c17-27cb-40e3-83a7-8caf3ce6f000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102790] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8d43736-4b28-4e30-a943-3518e056a91c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17634"}}
	{"specversion":"1.0","id":"450b1461-c98c-4754-b16f-898044278a01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45d43a08-7cdb-40a7-8a42-a04317aa73ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig"}}
	{"specversion":"1.0","id":"d256f1de-4a0f-4b0d-801c-9e78c8a84add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube"}}
	{"specversion":"1.0","id":"ce36f6ca-0a0a-4025-a4cd-4922c21f8565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b997eccc-ca90-4f87-b28d-ed82db8abd05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"646441a2-cc0f-4387-90c2-4fae9baaf836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-102790
--- PASS: TestErrorJSONOutput (0.23s)
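The `--output=json` stdout captured above is a stream of CloudEvents-style envelopes, one JSON object per line, with the event payload under `data`. As an illustration (a minimal sketch with its own hypothetical `cloudEvent` struct, not minikube's internal types), the `io.k8s.sigs.minikube.error` event from that stream can be decoded like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent is a hypothetical, minimal view of the envelope above;
// the free-form "data" payload decodes into a string map.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// line is the error event copied verbatim from the stdout block above.
var line = `{"specversion":"1.0","id":"646441a2-cc0f-4387-90c2-4fae9baaf836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

func parseEvent(s string) cloudEvent {
	var ev cloudEvent
	if err := json.Unmarshal([]byte(s), &ev); err != nil {
		panic(err)
	}
	return ev
}

var ev = parseEvent(line)

func main() {
	// Prints the error name, message, and the exit code the test asserts on.
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}
```

Note that numeric-looking fields such as `exitcode` and `currentstep` are emitted as JSON strings, which is why a `map[string]string` suffices here.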

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (140.04s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-780258 --driver=kvm2  --container-runtime=containerd
E1117 16:12:07.621973   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.627278   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.637634   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.657911   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.698251   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.778566   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:07.939129   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:08.259769   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:08.900881   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:10.181448   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:12.742772   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:17.863314   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:28.104142   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:12:48.584769   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-780258 --driver=kvm2  --container-runtime=containerd: (1m5.151281617s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-782269 --driver=kvm2  --container-runtime=containerd
E1117 16:13:29.545649   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-782269 --driver=kvm2  --container-runtime=containerd: (1m11.821327624s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-780258
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-782269
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-782269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-782269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-782269: (1.032630019s)
helpers_test.go:175: Cleaning up "first-780258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-780258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-780258: (1.097472031s)
--- PASS: TestMinikubeProfile (140.04s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-562883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1117 16:14:51.466105   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-562883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.028675806s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-562883 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-562883 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-584033 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1117 16:14:57.878177   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:57.883545   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:57.893884   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:57.914325   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:57.954667   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:58.035063   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:58.195520   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:58.516133   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:14:59.157156   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:15:00.437685   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:15:02.998861   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:15:08.119975   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:15:14.191750   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:15:18.360298   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-584033 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.757020339s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.76s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-562883 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-584033
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-584033: (1.19765163s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.24s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-584033
E1117 16:15:38.840484   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-584033: (23.244608978s)
--- PASS: TestMountStart/serial/RestartStopped (24.24s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-584033 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (145.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327913 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1117 16:16:19.801607   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:17:07.619105   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:17:35.306731   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:17:41.724063   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327913 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m24.89058494s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (145.34s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-327913 -- rollout status deployment/busybox: (2.536391828s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-d4vxk -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-rfbn8 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-d4vxk -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-rfbn8 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-d4vxk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-rfbn8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.35s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-d4vxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-d4vxk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-rfbn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327913 -- exec busybox-5bc68d56bd-rfbn8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
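The exec commands above resolve the host IP with the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: take line 5 of the resolver output, then its third field, with every single space counting as a delimiter (as `cut` does). A sketch of the same extraction in Go, using hypothetical busybox-style nslookup output (the sample text is illustrative, not captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// sample is hypothetical busybox-style nslookup output; line 5 carries
// the resolved address as its third single-space-separated field.
var sample = "Server:\t\t10.96.0.10\n" +
	"Address:\t10.96.0.10:53\n" +
	"\n" +
	"Name:\thost.minikube.internal\n" +
	"Address 1: 192.168.39.1 host.minikube.internal\n"

// hostIP mirrors `awk 'NR==5' | cut -d' ' -f3`: select line 5, then the
// third field, splitting on every single space exactly as cut does
// (consecutive spaces would produce empty fields, not be collapsed).
func hostIP(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	fmt.Println(hostIP(sample))
}
```

This line/field-position approach is brittle against resolver output format changes, which is worth keeping in mind when such pipelines fail on a new base image.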

                                                
                                    
TestMultiNode/serial/AddNode (41.49s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-327913 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-327913 -v 3 --alsologtostderr: (40.877560073s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.49s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp testdata/cp-test.txt multinode-327913:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3320880946/001/cp-test_multinode-327913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913:/home/docker/cp-test.txt multinode-327913-m02:/home/docker/cp-test_multinode-327913_multinode-327913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test_multinode-327913_multinode-327913-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913:/home/docker/cp-test.txt multinode-327913-m03:/home/docker/cp-test_multinode-327913_multinode-327913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test_multinode-327913_multinode-327913-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp testdata/cp-test.txt multinode-327913-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3320880946/001/cp-test_multinode-327913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m02:/home/docker/cp-test.txt multinode-327913:/home/docker/cp-test_multinode-327913-m02_multinode-327913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test_multinode-327913-m02_multinode-327913.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m02:/home/docker/cp-test.txt multinode-327913-m03:/home/docker/cp-test_multinode-327913-m02_multinode-327913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test_multinode-327913-m02_multinode-327913-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp testdata/cp-test.txt multinode-327913-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3320880946/001/cp-test_multinode-327913-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m03:/home/docker/cp-test.txt multinode-327913:/home/docker/cp-test_multinode-327913-m03_multinode-327913.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913 "sudo cat /home/docker/cp-test_multinode-327913-m03_multinode-327913.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 cp multinode-327913-m03:/home/docker/cp-test.txt multinode-327913-m02:/home/docker/cp-test_multinode-327913-m03_multinode-327913-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 ssh -n multinode-327913-m02 "sudo cat /home/docker/cp-test_multinode-327913-m03_multinode-327913-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.05s)

TestMultiNode/serial/StopNode (2.2s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-327913 node stop m03: (1.266639118s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327913 status: exit status 7 (481.148447ms)

                                                
                                                
-- stdout --
	multinode-327913
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327913-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327913-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr: exit status 7 (455.187801ms)

                                                
                                                
-- stdout --
	multinode-327913
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327913-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327913-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 16:19:11.381052   31346 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:19:11.381182   31346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:19:11.381190   31346 out.go:309] Setting ErrFile to fd 2...
	I1117 16:19:11.381195   31346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:19:11.381396   31346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:19:11.381564   31346 out.go:303] Setting JSON to false
	I1117 16:19:11.381600   31346 mustload.go:65] Loading cluster: multinode-327913
	I1117 16:19:11.381692   31346 notify.go:220] Checking for updates...
	I1117 16:19:11.381960   31346 config.go:182] Loaded profile config "multinode-327913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:19:11.381972   31346 status.go:255] checking status of multinode-327913 ...
	I1117 16:19:11.382378   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.382428   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.397383   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I1117 16:19:11.397837   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.398373   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.398397   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.398812   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.399041   31346 main.go:141] libmachine: (multinode-327913) Calling .GetState
	I1117 16:19:11.400684   31346 status.go:330] multinode-327913 host status = "Running" (err=<nil>)
	I1117 16:19:11.400701   31346 host.go:66] Checking if "multinode-327913" exists ...
	I1117 16:19:11.400984   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.401020   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.416084   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I1117 16:19:11.416559   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.417023   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.417044   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.417384   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.417586   31346 main.go:141] libmachine: (multinode-327913) Calling .GetIP
	I1117 16:19:11.420951   31346 main.go:141] libmachine: (multinode-327913) DBG | domain multinode-327913 has defined MAC address 52:54:00:39:8f:95 in network mk-multinode-327913
	I1117 16:19:11.421440   31346 main.go:141] libmachine: (multinode-327913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:8f:95", ip: ""} in network mk-multinode-327913: {Iface:virbr1 ExpiryTime:2023-11-17 17:16:05 +0000 UTC Type:0 Mac:52:54:00:39:8f:95 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-327913 Clientid:01:52:54:00:39:8f:95}
	I1117 16:19:11.421469   31346 main.go:141] libmachine: (multinode-327913) DBG | domain multinode-327913 has defined IP address 192.168.39.19 and MAC address 52:54:00:39:8f:95 in network mk-multinode-327913
	I1117 16:19:11.421663   31346 host.go:66] Checking if "multinode-327913" exists ...
	I1117 16:19:11.421972   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.422016   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.437368   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I1117 16:19:11.437813   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.438304   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.438325   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.438626   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.438848   31346 main.go:141] libmachine: (multinode-327913) Calling .DriverName
	I1117 16:19:11.439054   31346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:19:11.439081   31346 main.go:141] libmachine: (multinode-327913) Calling .GetSSHHostname
	I1117 16:19:11.441854   31346 main.go:141] libmachine: (multinode-327913) DBG | domain multinode-327913 has defined MAC address 52:54:00:39:8f:95 in network mk-multinode-327913
	I1117 16:19:11.442271   31346 main.go:141] libmachine: (multinode-327913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:8f:95", ip: ""} in network mk-multinode-327913: {Iface:virbr1 ExpiryTime:2023-11-17 17:16:05 +0000 UTC Type:0 Mac:52:54:00:39:8f:95 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-327913 Clientid:01:52:54:00:39:8f:95}
	I1117 16:19:11.442306   31346 main.go:141] libmachine: (multinode-327913) DBG | domain multinode-327913 has defined IP address 192.168.39.19 and MAC address 52:54:00:39:8f:95 in network mk-multinode-327913
	I1117 16:19:11.442488   31346 main.go:141] libmachine: (multinode-327913) Calling .GetSSHPort
	I1117 16:19:11.442698   31346 main.go:141] libmachine: (multinode-327913) Calling .GetSSHKeyPath
	I1117 16:19:11.442845   31346 main.go:141] libmachine: (multinode-327913) Calling .GetSSHUsername
	I1117 16:19:11.442984   31346 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/multinode-327913/id_rsa Username:docker}
	I1117 16:19:11.534628   31346 ssh_runner.go:195] Run: systemctl --version
	I1117 16:19:11.540730   31346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:19:11.555019   31346 kubeconfig.go:92] found "multinode-327913" server: "https://192.168.39.19:8443"
	I1117 16:19:11.555046   31346 api_server.go:166] Checking apiserver status ...
	I1117 16:19:11.555079   31346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 16:19:11.568636   31346 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	I1117 16:19:11.578163   31346 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod1241bd2c3476ce22dafd653e7cff259c/eaeb8011cc6cb965274051d1f69b1fbe8811d10d20ec56186736c350f39f80c8"
	I1117 16:19:11.578231   31346 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1241bd2c3476ce22dafd653e7cff259c/eaeb8011cc6cb965274051d1f69b1fbe8811d10d20ec56186736c350f39f80c8/freezer.state
	I1117 16:19:11.588019   31346 api_server.go:204] freezer state: "THAWED"
	I1117 16:19:11.588052   31346 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1117 16:19:11.593335   31346 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1117 16:19:11.593363   31346 status.go:421] multinode-327913 apiserver status = Running (err=<nil>)
	I1117 16:19:11.593374   31346 status.go:257] multinode-327913 status: &{Name:multinode-327913 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:19:11.593393   31346 status.go:255] checking status of multinode-327913-m02 ...
	I1117 16:19:11.593715   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.593758   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.608615   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I1117 16:19:11.608981   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.609447   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.609472   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.609757   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.609901   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetState
	I1117 16:19:11.611558   31346 status.go:330] multinode-327913-m02 host status = "Running" (err=<nil>)
	I1117 16:19:11.611579   31346 host.go:66] Checking if "multinode-327913-m02" exists ...
	I1117 16:19:11.611986   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.612034   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.626628   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1117 16:19:11.627004   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.627456   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.627478   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.627744   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.627984   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetIP
	I1117 16:19:11.630952   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | domain multinode-327913-m02 has defined MAC address 52:54:00:12:14:34 in network mk-multinode-327913
	I1117 16:19:11.631351   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:14:34", ip: ""} in network mk-multinode-327913: {Iface:virbr1 ExpiryTime:2023-11-17 17:17:42 +0000 UTC Type:0 Mac:52:54:00:12:14:34 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-327913-m02 Clientid:01:52:54:00:12:14:34}
	I1117 16:19:11.631393   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | domain multinode-327913-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:12:14:34 in network mk-multinode-327913
	I1117 16:19:11.631511   31346 host.go:66] Checking if "multinode-327913-m02" exists ...
	I1117 16:19:11.631800   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.631837   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.647018   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1117 16:19:11.647426   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.647908   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.647932   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.648236   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.648423   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .DriverName
	I1117 16:19:11.648671   31346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:19:11.648701   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetSSHHostname
	I1117 16:19:11.651612   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | domain multinode-327913-m02 has defined MAC address 52:54:00:12:14:34 in network mk-multinode-327913
	I1117 16:19:11.652052   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:14:34", ip: ""} in network mk-multinode-327913: {Iface:virbr1 ExpiryTime:2023-11-17 17:17:42 +0000 UTC Type:0 Mac:52:54:00:12:14:34 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-327913-m02 Clientid:01:52:54:00:12:14:34}
	I1117 16:19:11.652088   31346 main.go:141] libmachine: (multinode-327913-m02) DBG | domain multinode-327913-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:12:14:34 in network mk-multinode-327913
	I1117 16:19:11.652267   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetSSHPort
	I1117 16:19:11.652484   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetSSHKeyPath
	I1117 16:19:11.652661   31346 main.go:141] libmachine: (multinode-327913-m02) Calling .GetSSHUsername
	I1117 16:19:11.652765   31346 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9289/.minikube/machines/multinode-327913-m02/id_rsa Username:docker}
	I1117 16:19:11.746188   31346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:19:11.759913   31346 status.go:257] multinode-327913-m02 status: &{Name:multinode-327913-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:19:11.759958   31346 status.go:255] checking status of multinode-327913-m03 ...
	I1117 16:19:11.760326   31346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:19:11.760374   31346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:19:11.776382   31346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I1117 16:19:11.776956   31346 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:19:11.777469   31346 main.go:141] libmachine: Using API Version  1
	I1117 16:19:11.777500   31346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:19:11.777838   31346 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:19:11.778036   31346 main.go:141] libmachine: (multinode-327913-m03) Calling .GetState
	I1117 16:19:11.779704   31346 status.go:330] multinode-327913-m03 host status = "Stopped" (err=<nil>)
	I1117 16:19:11.779719   31346 status.go:343] host is not running, skipping remaining checks
	I1117 16:19:11.779724   31346 status.go:257] multinode-327913-m03 status: &{Name:multinode-327913-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

TestMultiNode/serial/StartAfterStop (27.56s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-327913 node start m03 --alsologtostderr: (26.892045143s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.56s)

TestMultiNode/serial/RestartKeepsNodes (321.05s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327913
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-327913
E1117 16:19:57.877963   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:20:14.192066   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:20:25.565026   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:21:37.241394   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:22:07.622369   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-327913: (3m4.853036193s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327913 --wait=true -v=8 --alsologtostderr
E1117 16:24:57.878030   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327913 --wait=true -v=8 --alsologtostderr: (2m16.070682489s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327913
--- PASS: TestMultiNode/serial/RestartKeepsNodes (321.05s)

TestMultiNode/serial/DeleteNode (1.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-327913 node delete m03: (1.054352084s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.65s)

TestMultiNode/serial/StopMultiNode (183.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 stop
E1117 16:25:14.191803   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:27:07.622342   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-327913 stop: (3m3.599418543s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327913 status: exit status 7 (103.16212ms)

                                                
                                                
-- stdout --
	multinode-327913
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327913-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr: exit status 7 (94.262142ms)

                                                
                                                
-- stdout --
	multinode-327913
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327913-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 16:28:05.792838   33510 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:28:05.792973   33510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:28:05.792979   33510 out.go:309] Setting ErrFile to fd 2...
	I1117 16:28:05.792983   33510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:28:05.793196   33510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:28:05.793368   33510 out.go:303] Setting JSON to false
	I1117 16:28:05.793393   33510 mustload.go:65] Loading cluster: multinode-327913
	I1117 16:28:05.793461   33510 notify.go:220] Checking for updates...
	I1117 16:28:05.793853   33510 config.go:182] Loaded profile config "multinode-327913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:28:05.793871   33510 status.go:255] checking status of multinode-327913 ...
	I1117 16:28:05.794421   33510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:28:05.794492   33510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:28:05.808588   33510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I1117 16:28:05.808975   33510 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:28:05.809579   33510 main.go:141] libmachine: Using API Version  1
	I1117 16:28:05.809605   33510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:28:05.809948   33510 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:28:05.810186   33510 main.go:141] libmachine: (multinode-327913) Calling .GetState
	I1117 16:28:05.811916   33510 status.go:330] multinode-327913 host status = "Stopped" (err=<nil>)
	I1117 16:28:05.811936   33510 status.go:343] host is not running, skipping remaining checks
	I1117 16:28:05.811944   33510 status.go:257] multinode-327913 status: &{Name:multinode-327913 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:28:05.811967   33510 status.go:255] checking status of multinode-327913-m02 ...
	I1117 16:28:05.812419   33510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1117 16:28:05.812489   33510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:28:05.826941   33510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36975
	I1117 16:28:05.827334   33510 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:28:05.827755   33510 main.go:141] libmachine: Using API Version  1
	I1117 16:28:05.827779   33510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:28:05.828092   33510 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:28:05.828334   33510 main.go:141] libmachine: (multinode-327913-m02) Calling .GetState
	I1117 16:28:05.829778   33510 status.go:330] multinode-327913-m02 host status = "Stopped" (err=<nil>)
	I1117 16:28:05.829792   33510 status.go:343] host is not running, skipping remaining checks
	I1117 16:28:05.829799   33510 status.go:257] multinode-327913-m02 status: &{Name:multinode-327913-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.80s)

TestMultiNode/serial/RestartMultiNode (99.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327913 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1117 16:28:30.667842   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327913 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m39.239964817s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327913 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (99.82s)

TestMultiNode/serial/ValidateNameConflict (69.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327913
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327913-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-327913-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (83.189578ms)

                                                
                                                
-- stdout --
	* [multinode-327913-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-327913-m02' is duplicated with machine name 'multinode-327913-m02' in profile 'multinode-327913'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327913-m03 --driver=kvm2  --container-runtime=containerd
E1117 16:29:57.877995   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:30:14.191842   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327913-m03 --driver=kvm2  --container-runtime=containerd: (1m8.342592803s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-327913
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-327913: exit status 80 (247.055749ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-327913
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-327913-m03 already exists in multinode-327913-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-327913-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (69.59s)

TestPreload (246.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-327961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1117 16:31:20.925880   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:32:07.621449   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-327961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m30.840314292s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-327961 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-327961 image pull gcr.io/k8s-minikube/busybox: (1.102967277s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-327961
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-327961: (1m31.751100296s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-327961 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1117 16:34:57.878419   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-327961 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.030790383s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-327961 image list
helpers_test.go:175: Cleaning up "test-preload-327961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-327961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-327961: (1.106613339s)
--- PASS: TestPreload (246.08s)
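Condensed, the preload scenario above is a five-step CLI sequence: start without a preload tarball, pull an extra image, stop, restart, and list images to confirm the pull survived the stop/start cycle. A sketch that only prints the commands (taken from the log, minus test-only flags), since minikube and a KVM host are not assumed to be available here; the helper name is hypothetical:

```shell
#!/bin/sh
# Print the TestPreload command sequence. Nothing is executed against
# a cluster; this only emits the commands shown in the log above.
preload_sequence() {
  p="$1"
  echo "minikube start -p $p --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4"
  echo "minikube -p $p image pull gcr.io/k8s-minikube/busybox"
  echo "minikube stop -p $p"
  echo "minikube start -p $p --memory=2200 --driver=kvm2 --container-runtime=containerd"
  echo "minikube -p $p image list"
}

preload_sequence test-preload-327961
```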

TestScheduledStopUnix (139.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-653277 --memory=2048 --driver=kvm2  --container-runtime=containerd
E1117 16:35:14.191833   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-653277 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m7.147258291s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653277 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-653277 -n scheduled-stop-653277
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653277 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653277 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653277 -n scheduled-stop-653277
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-653277
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653277 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1117 16:37:07.619204   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-653277
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-653277: exit status 7 (76.330902ms)

-- stdout --
	scheduled-stop-653277
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653277 -n scheduled-stop-653277
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653277 -n scheduled-stop-653277: exit status 7 (83.149498ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-653277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-653277
--- PASS: TestScheduledStopUnix (139.10s)

TestRunningBinaryUpgrade (200.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1917406463.exe start -p running-upgrade-636534 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1917406463.exe start -p running-upgrade-636534 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m42.445917325s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-636534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-636534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m35.812112541s)
helpers_test.go:175: Cleaning up "running-upgrade-636534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-636534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-636534: (1.203700351s)
--- PASS: TestRunningBinaryUpgrade (200.23s)

TestKubernetesUpgrade (205.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.262468974s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-480224
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-480224: (6.125903513s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-480224 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-480224 status --format={{.Host}}: exit status 7 (89.575539ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.550695897s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-480224 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (193.376253ms)

-- stdout --
	* [kubernetes-upgrade-480224] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-480224
	    minikube start -p kubernetes-upgrade-480224 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4802242 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-480224 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-480224 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.0424132s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-480224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-480224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-480224: (1.247484073s)
--- PASS: TestKubernetesUpgrade (205.61s)
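The K8S_DOWNGRADE_UNSUPPORTED failure in this section is the expected path: minikube refuses an in-place downgrade (exit status 106) and prints recovery commands instead. A minimal sketch that emits the delete-and-recreate sequence from that message (profile name and versions taken from the log; the `suggest_downgrade` helper is hypothetical):

```shell
#!/bin/sh
# Emit the delete-and-recreate sequence minikube suggests when it
# refuses an in-place Kubernetes downgrade. Commands are printed,
# not run, since no KVM host is assumed here.
suggest_downgrade() {
  p="$1"  # profile, e.g. kubernetes-upgrade-480224
  v="$2"  # target version, e.g. v1.16.0
  echo "minikube delete -p $p"
  echo "minikube start -p $p --kubernetes-version=$v"
}

suggest_downgrade kubernetes-upgrade-480224 v1.16.0
```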

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (105.759706ms)

-- stdout --
	* [NoKubernetes-708530] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
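The MK_USAGE error above shows that `--kubernetes-version` and `--no-kubernetes` are mutually exclusive, and that a globally configured version default must be cleared first. A hypothetical wrapper that prints the fix minikube itself suggests, followed by the clean start:

```shell
#!/bin/sh
# Print the remediation for the MK_USAGE conflict: unset any global
# kubernetes-version default, then start without Kubernetes. The
# wrapper is illustrative only; commands are echoed, not executed.
no_k8s_start() {
  p="$1"
  echo "minikube config unset kubernetes-version"
  echo "minikube start -p $p --no-kubernetes --driver=kvm2 --container-runtime=containerd"
}

no_k8s_start NoKubernetes-708530
```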

TestNoKubernetes/serial/StartWithK8s (151.5s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-708530 --driver=kvm2  --container-runtime=containerd
E1117 16:38:17.242092   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-708530 --driver=kvm2  --container-runtime=containerd: (2m31.192897823s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-708530 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (151.50s)

TestNoKubernetes/serial/StartWithStopK8s (17.07s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1117 16:39:57.878346   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (15.667033683s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-708530 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-708530 status -o json: exit status 2 (280.836891ms)

-- stdout --
	{"Name":"NoKubernetes-708530","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-708530
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-708530: (1.124513753s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.07s)

TestNoKubernetes/serial/Start (29.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1117 16:40:14.191469   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-708530 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.077866568s)
--- PASS: TestNoKubernetes/serial/Start (29.08s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-708530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-708530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.93255ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
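This check passes precisely because the SSH'd `systemctl is-active --quiet service kubelet` exits non-zero: 0 means the unit is active, and the `Process exited with status 3` above is systemctl's code for an inactive unit. A sketch of the exit-status interpretation (the mapping for codes other than 0 and 3 is an assumption, so they are reported generically):

```shell
#!/bin/sh
# Map the exit status of `systemctl is-active --quiet <unit>` to a
# human-readable state. 0 = active; 3 (seen in the log above) means
# inactive; other codes are not distinguished here.
interpret_is_active() {
  case "$1" in
    0) echo "active" ;;
    3) echo "inactive" ;;
    *) echo "not active (status $1)" ;;
  esac
}

interpret_is_active 3   # prints "inactive", matching the passing check
```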

TestNoKubernetes/serial/ProfileList (0.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.76s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-708530
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-708530: (1.268627321s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (74.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-708530 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-708530 --driver=kvm2  --container-runtime=containerd: (1m14.686252686s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (74.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-708530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-708530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (249.263331ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

TestStoppedBinaryUpgrade/Setup (1.18s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

TestStoppedBinaryUpgrade/Upgrade (137.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.1866668206.exe start -p stopped-upgrade-154388 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.1866668206.exe start -p stopped-upgrade-154388 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m19.231147738s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.1866668206.exe -p stopped-upgrade-154388 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.1866668206.exe -p stopped-upgrade-154388 stop: (2.469067088s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-154388 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-154388 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.430213515s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (137.13s)

TestNetworkPlugins/group/false (3.93s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-062072 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-062072 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (138.303665ms)

-- stdout --
	* [false-062072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 16:42:05.080097   40955 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:42:05.080307   40955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:42:05.080320   40955 out.go:309] Setting ErrFile to fd 2...
	I1117 16:42:05.080328   40955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:42:05.080673   40955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9289/.minikube/bin
	I1117 16:42:05.081580   40955 out.go:303] Setting JSON to false
	I1117 16:42:05.082950   40955 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5074,"bootTime":1700234251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:42:05.083021   40955 start.go:138] virtualization: kvm guest
	I1117 16:42:05.085822   40955 out.go:177] * [false-062072] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 16:42:05.087418   40955 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:42:05.089192   40955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:42:05.087390   40955 notify.go:220] Checking for updates...
	I1117 16:42:05.091000   40955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9289/kubeconfig
	I1117 16:42:05.092725   40955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9289/.minikube
	I1117 16:42:05.094328   40955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:42:05.096117   40955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:42:05.098279   40955 config.go:182] Loaded profile config "kubernetes-upgrade-480224": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1117 16:42:05.098409   40955 config.go:182] Loaded profile config "running-upgrade-636534": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1117 16:42:05.098507   40955 config.go:182] Loaded profile config "stopped-upgrade-154388": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I1117 16:42:05.098623   40955 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:42:05.134698   40955 out.go:177] * Using the kvm2 driver based on user configuration
	I1117 16:42:05.136311   40955 start.go:298] selected driver: kvm2
	I1117 16:42:05.136334   40955 start.go:902] validating driver "kvm2" against <nil>
	I1117 16:42:05.136346   40955 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:42:05.139029   40955 out.go:177] 
	W1117 16:42:05.140504   40955 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1117 16:42:05.141878   40955 out.go:177] 

** /stderr **
E1117 16:42:07.619624   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-062072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-062072

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-062072

>>> host: /etc/nsswitch.conf:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

>>> host: /etc/hosts:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

>>> host: /etc/resolv.conf:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-062072

>>> host: crictl pods:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

>>> host: crictl containers:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

>>> k8s: describe netcat deployment:
error: context "false-062072" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-062072" does not exist

>>> k8s: netcat logs:
error: context "false-062072" does not exist

>>> k8s: describe coredns deployment:
error: context "false-062072" does not exist

>>> k8s: describe coredns pods:
error: context "false-062072" does not exist
                                                
>>> k8s: coredns logs:
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-062072" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-062072

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062072"

                                                
                                                
----------------------- debugLogs end: false-062072 [took: 3.61137853s] --------------------------------
helpers_test.go:175: Cleaning up "false-062072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-062072
--- PASS: TestNetworkPlugins/group/false (3.93s)

                                                
                                    
x
+
TestPause/serial/Start (158.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-389229 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-389229 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m38.864098921s)
--- PASS: TestPause/serial/Start (158.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (149.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m29.801997603s)
--- PASS: TestNetworkPlugins/group/auto/Start (149.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (126.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (2m6.312612852s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (126.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-154388
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-154388: (1.293759926s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (165.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E1117 16:44:57.878624   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:45:10.668593   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:45:14.191079   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m45.193994695s)
--- PASS: TestNetworkPlugins/group/calico/Start (165.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bql8k" [fe7d8198-7908-41d2-a199-1ad6c700776e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024766554s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9mmh2" [367d171b-749b-4d87-8962-e0b56e6427dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9mmh2" [367d171b-749b-4d87-8962-e0b56e6427dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.019649729s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-389229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-389229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.431540946s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jvrbl" [92549df5-149e-446d-9856-bd196c1e5b37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jvrbl" [92549df5-149e-446d-9856-bd196c1e5b37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.013577085s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.58s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-389229 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-389229 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-389229 --output=json --layout=cluster: exit status 2 (272.535614ms)

                                                
                                                
-- stdout --
	{"Name":"pause-389229","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-389229","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-389229 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-389229 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.18s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-389229 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-389229 --alsologtostderr -v=5: (1.176092513s)
--- PASS: TestPause/serial/DeletePaused (1.18s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (114.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m54.019420465s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (154.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m34.555192425s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (154.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (154.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m34.555387103s)
--- PASS: TestNetworkPlugins/group/flannel/Start (154.56s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-km78z" [5766c672-5f7f-4b8e-92fb-0f9c76be77c3] Running
E1117 16:47:07.619908   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.03063224s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-28m8p" [c1fedbb7-1c8c-45cf-ab15-4289c833222b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-28m8p" [c1fedbb7-1c8c-45cf-ab15-4289c833222b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.024502492s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (150.69s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E1117 16:48:00.926121   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-062072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m30.691696428s)
--- PASS: TestNetworkPlugins/group/bridge/Start (150.69s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mb55v" [cc99077c-bfb0-4278-af5b-85c551667cdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mb55v" [cc99077c-bfb0-4278-af5b-85c551667cdd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011209855s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (146.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-780140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-780140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m26.881452135s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.88s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9vnnn" [61bec5a0-98db-47bc-b3aa-7638d93b70b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9vnnn" [61bec5a0-98db-47bc-b3aa-7638d93b70b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.017283614s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.45s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9cj66" [faae62d1-5ce3-448c-a35e-002409c3cf60] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.041991077s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kfds9" [cbda8a6e-90ad-4288-81d9-05ebe7da74d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kfds9" [cbda8a6e-90ad-4288-81d9-05ebe7da74d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.018117073s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (88.47s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-540275 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1117 16:49:57.877915   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-540275 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m28.466252856s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.47s)

TestStartStop/group/embed-certs/serial/FirstStart (146.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-120096 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-120096 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (2m26.790294195s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (146.79s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-062072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-062072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kvzml" [8a1be3c3-1bd0-4f99-9517-93eb158cb2d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1117 16:50:14.191897   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kvzml" [8a1be3c3-1bd0-4f99-9517-93eb158cb2d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.015043943s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.36s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-062072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-062072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-974578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1117 16:51:11.373213   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.378507   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.388838   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.409158   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.449527   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.530063   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:11.690491   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:12.011117   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:12.651439   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:13.932612   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:16.492828   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-974578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m38.360824626s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.36s)

TestStartStop/group/no-preload/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-540275 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e59cc21a-f34d-4ac3-a69e-864ba8049938] Pending
E1117 16:51:18.957159   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:18.962482   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:18.972817   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:18.993132   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:19.033456   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:19.114357   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:19.274782   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:19.595802   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e59cc21a-f34d-4ac3-a69e-864ba8049938] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1117 16:51:20.235955   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:21.516178   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:21.613452   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e59cc21a-f34d-4ac3-a69e-864ba8049938] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.036467528s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-540275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.57s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-780140 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [094f6c4d-76ab-4139-b9cb-c6e7f746e14d] Pending
helpers_test.go:344: "busybox" [094f6c4d-76ab-4139-b9cb-c6e7f746e14d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1117 16:51:24.076929   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
helpers_test.go:344: "busybox" [094f6c4d-76ab-4139-b9cb-c6e7f746e14d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.05088038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-780140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-540275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-540275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.931208233s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-540275 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.06s)

TestStartStop/group/no-preload/serial/Stop (92.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-540275 --alsologtostderr -v=3
E1117 16:51:29.197869   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-540275 --alsologtostderr -v=3: (1m32.011255277s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-780140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1117 16:51:31.854700   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-780140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.296914454s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-780140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/old-k8s-version/serial/Stop (92.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-780140 --alsologtostderr -v=3
E1117 16:51:39.438578   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:51:52.334837   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:51:59.919343   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:52:04.351885   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.357253   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.367581   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.387913   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.428278   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.508665   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.669529   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:04.990472   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:05.631653   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:06.912176   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:07.619090   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:52:09.473091   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:52:14.594121   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-780140 --alsologtostderr -v=3: (1m32.636175863s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-974578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d116c93a-2181-4187-a3b2-e958c7b86825] Pending
helpers_test.go:344: "busybox" [d116c93a-2181-4187-a3b2-e958c7b86825] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d116c93a-2181-4187-a3b2-e958c7b86825] Running
E1117 16:52:24.834732   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.036036239s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-974578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)
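The harness above polls for up to 8m0s until pods matching `integration-test=busybox` report Running, then execs `ulimit -n` in the pod. A minimal sketch of that polling shape as a standalone helper (the function name `retry_until` and the attempt budget are illustrative, not the harness's actual code):

```shell
#!/usr/bin/env sh
# retry_until: run a command until it succeeds or the attempt budget
# is exhausted, sleeping 1s between attempts. The harness does the
# equivalent with a label selector and an 8m deadline.
retry_until() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage sketch (assumes kubectl on PATH and a context like the log's):
#   retry_until 480 sh -c \
#     'kubectl --context default-k8s-diff-port-974578 get pod \
#        -l integration-test=busybox \
#        -o jsonpath="{.items[0].status.phase}" | grep -q Running'
```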

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-974578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-974578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.143940034s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-974578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-120096 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [091a24f0-9d79-4081-9e5e-32980ac09b42] Pending
helpers_test.go:344: "busybox" [091a24f0-9d79-4081-9e5e-32980ac09b42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [091a24f0-9d79-4081-9e5e-32980ac09b42] Running
E1117 16:52:33.295142   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.044174486s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-120096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-974578 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-974578 --alsologtostderr -v=3: (1m31.836266721s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.84s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-120096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-120096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.134800226s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-120096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (91.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-120096 --alsologtostderr -v=3
E1117 16:52:40.880413   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:52:45.315014   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-120096 --alsologtostderr -v=3: (1m31.859297524s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-540275 -n no-preload-540275
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-540275 -n no-preload-540275: exit status 7 (80.99103ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-540275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
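`exit status 7` from `minikube status` above pairs with the `Stopped` host state, and the harness explicitly continues with `status error: exit status 7 (may be ok)`. A minimal sketch of tolerating that code in a wrapper script (the `host_state` helper and its output labels are illustrative assumptions, not minikube's API):

```shell
#!/usr/bin/env sh
# host_state: translate a `minikube status` exit code into a coarse
# label. In the log above, exit code 7 accompanied a "Stopped" host
# and the test proceeded instead of failing; any other non-zero code
# is treated as a real error here.
host_state() {
  case "$1" in
    0) echo "running" ;;
    7) echo "stopped" ;;   # host down but profile usable, e.g. before a restart
    *) echo "error" ;;
  esac
}

# Usage sketch (assumes minikube on PATH and a profile like the log's):
#   out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-540275
#   case "$(host_state $?)" in
#     running|stopped) : ;;                      # proceed, e.g. enable an addon
#     error) echo "unexpected status" >&2; exit 1 ;;
#   esac
```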

TestStartStop/group/no-preload/serial/SecondStart (334.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-540275 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-540275 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m34.697994495s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-540275 -n no-preload-540275
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-780140 -n old-k8s-version-780140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-780140 -n old-k8s-version-780140: exit status 7 (85.15539ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-780140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (466.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-780140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1117 16:53:25.090063   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.095415   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.105726   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.125836   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.166943   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.247707   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.408148   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:25.728866   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:26.275955   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:53:26.369573   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:27.650684   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:30.211520   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:35.331792   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:45.571966   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:53:55.216258   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-780140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m45.759888101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-780140 -n old-k8s-version-780140
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (466.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578: exit status 7 (84.940656ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-974578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-974578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1117 16:54:02.800999   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:54:06.052216   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-974578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (6m0.463015518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-120096 -n embed-certs-120096
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-120096 -n embed-certs-120096: exit status 7 (100.117493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-120096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (354.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-120096 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1117 16:54:21.999214   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.004569   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.014865   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.036184   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.077134   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.157618   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.318595   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:22.639466   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:23.279683   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:24.461436   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.466698   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.477007   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.497335   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.537670   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.559804   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:24.618004   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:24.778416   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:25.098863   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:25.739843   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:27.020747   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:27.121046   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:29.581530   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:32.242080   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:34.702118   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:42.482355   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:54:44.942767   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:54:47.013020   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:54:48.196755   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:54:57.242722   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:54:57.878458   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
E1117 16:55:02.963503   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:55:05.423888   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:55:12.942577   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:12.947897   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:12.958237   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:12.978675   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:13.019640   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:13.100009   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:13.260492   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:13.581093   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:14.191881   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
E1117 16:55:14.222129   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:15.502438   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:18.062745   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:23.183579   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:33.423941   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:55:43.924357   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:55:46.385184   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:55:53.904684   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:56:08.934131   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:56:11.373415   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:56:18.957379   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:56:34.864992   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:56:39.057319   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/kindnet-062072/client.crt: no such file or directory
E1117 16:56:46.641668   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 16:57:04.352944   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:57:05.845089   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:57:07.619624   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/functional-857928/client.crt: no such file or directory
E1117 16:57:08.306168   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:57:32.037653   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/calico-062072/client.crt: no such file or directory
E1117 16:57:56.786038   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 16:58:25.091001   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-120096 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m53.941679031s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-120096 -n embed-certs-120096
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (354.53s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcvpk" [85b0caf2-3524-47e5-b51f-8df908b6420f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021123453s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcvpk" [85b0caf2-3524-47e5-b51f-8df908b6420f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013677304s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-540275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-540275 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
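The VerifyKubernetesImages step above runs `sudo crictl images -o json` over SSH and reports any image that is not part of the expected minikube image set (here, `gcr.io/k8s-minikube/busybox:1.28.4-glibc`). A minimal sketch of that filtering over crictl's JSON shape — the function name and the prefix allow-list are illustrative assumptions, not the test's exact expected-image list:

```python
import json

# Prefixes treated as "minikube" images in this sketch; the real test
# compares against an explicit expected-image list (assumption).
MINIKUBE_PREFIXES = ("registry.k8s.io/", "docker.io/kubernetesui/")

def non_minikube_images(crictl_json: str) -> list[str]:
    """Return repo tags from `crictl images -o json` output that do not
    match any of the known prefixes above."""
    images = json.loads(crictl_json).get("images", [])
    found = []
    for img in images:
        for tag in img.get("repoTags", []):
            if not tag.startswith(MINIKUBE_PREFIXES):
                found.append(tag)
    return found

# Payload shaped like crictl's JSON output (images/repoTags fields).
sample = json.dumps({"images": [
    {"repoTags": ["registry.k8s.io/kube-apiserver:v1.28.3"]},
    {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
]})
print(non_minikube_images(sample))  # the busybox tag is flagged
```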

TestStartStop/group/no-preload/serial/Pause (2.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-540275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-540275 -n no-preload-540275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-540275 -n no-preload-540275: exit status 2 (274.76351ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-540275 -n no-preload-540275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-540275 -n no-preload-540275: exit status 2 (270.429237ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-540275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-540275 -n no-preload-540275
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-540275 -n no-preload-540275
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)
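The Pause sequence above deliberately tolerates non-zero `minikube status` exits: after `pause`, the APIServer reports `Paused` and the kubelet `Stopped`, both with exit status 2, which the test logs as "may be ok" (exit status 7 gets the same treatment elsewhere in this run). A sketch of that exit-code interpretation — the helper name is hypothetical, and the 0/2/7 set is read off the "may be ok" lines in this log:

```python
def status_exit_ok(exit_code: int) -> bool:
    """Mirror the test's 'status error: exit status N (may be ok)' handling:
    0 means fully running; 2 and 7 report paused/stopped components, which
    is the expected state immediately after `minikube pause` or `stop`."""
    return exit_code in (0, 2, 7)

assert status_exit_ok(2)      # APIServer reported Paused
assert status_exit_ok(7)      # host reported Stopped
assert not status_exit_ok(1)  # a genuine status failure still surfaces
```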

TestStartStop/group/newest-cni/serial/FirstStart (90.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-025178 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1117 16:58:52.774456   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/custom-flannel-062072/client.crt: no such file or directory
E1117 16:59:22.000248   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:59:24.460484   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:59:49.686121   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/enable-default-cni-062072/client.crt: no such file or directory
E1117 16:59:52.146661   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/flannel-062072/client.crt: no such file or directory
E1117 16:59:57.878174   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/ingress-addon-legacy-670547/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-025178 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m30.853525977s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (90.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6bvdq" [5c38d396-acb6-4679-af80-196b91c1763f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6bvdq" [5c38d396-acb6-4679-af80-196b91c1763f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.031302124s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.32s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qcpch" [6c25e204-363d-49dd-91d3-f30c722df04d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1117 17:00:12.942811   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/bridge-062072/client.crt: no such file or directory
E1117 17:00:14.191049   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/addons-875867/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qcpch" [6c25e204-363d-49dd-91d3-f30c722df04d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.313263523s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.32s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-025178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-025178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.603984345s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qcpch" [6c25e204-363d-49dd-91d3-f30c722df04d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014430835s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-120096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6bvdq" [5c38d396-acb6-4679-af80-196b91c1763f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015693211s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-974578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/Stop (7.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-025178 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-025178 --alsologtostderr -v=3: (7.132777961s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-120096 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-974578 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-120096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-120096 -n embed-certs-120096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-120096 -n embed-certs-120096: exit status 2 (329.949665ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-120096 -n embed-certs-120096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-120096 -n embed-certs-120096: exit status 2 (286.367382ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-120096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-120096 -n embed-certs-120096
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-120096 -n embed-certs-120096
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-974578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578: exit status 2 (327.267768ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578: exit status 2 (286.604166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-974578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974578 -n default-k8s-diff-port-974578
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-025178 -n newest-cni-025178
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-025178 -n newest-cni-025178: exit status 7 (109.896384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-025178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (46.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-025178 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-025178 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (46.445352132s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-025178 -n newest-cni-025178
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.73s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8sxp6" [ed40894a-5f18-4b2b-be33-76e4ca3f526f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018634714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8sxp6" [ed40894a-5f18-4b2b-be33-76e4ca3f526f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012242705s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-780140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-780140 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-780140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-780140 -n old-k8s-version-780140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-780140 -n old-k8s-version-780140: exit status 2 (270.803445ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-780140 -n old-k8s-version-780140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-780140 -n old-k8s-version-780140: exit status 2 (267.679777ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-780140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-780140 -n old-k8s-version-780140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-780140 -n old-k8s-version-780140
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-025178 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-025178 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-025178 -n newest-cni-025178
E1117 17:01:18.716928   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
E1117 17:01:18.722211   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
E1117 17:01:18.732471   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
E1117 17:01:18.752819   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
E1117 17:01:18.793185   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-025178 -n newest-cni-025178: exit status 2 (263.525149ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-025178 -n newest-cni-025178
E1117 17:01:18.873516   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
E1117 17:01:18.957072   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/auto-062072/client.crt: no such file or directory
E1117 17:01:19.034257   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-025178 -n newest-cni-025178: exit status 2 (269.133949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-025178 --alsologtostderr -v=1
E1117 17:01:19.354454   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-025178 -n newest-cni-025178
E1117 17:01:19.994805   16538 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9289/.minikube/profiles/no-preload-540275/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-025178 -n newest-cni-025178
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)

                                                
                                    

Test skip (36/306)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
244 TestNetworkPlugins/group/kubenet 3.75
252 TestNetworkPlugins/group/cilium 4.15
258 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-062072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-062072

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-062072

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /etc/hosts:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /etc/resolv.conf:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-062072

>>> host: crictl pods:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: crictl containers:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> k8s: describe netcat deployment:
error: context "kubenet-062072" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-062072" does not exist

>>> k8s: netcat logs:
error: context "kubenet-062072" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-062072" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-062072" does not exist

>>> k8s: coredns logs:
error: context "kubenet-062072" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-062072" does not exist

>>> k8s: api server logs:
error: context "kubenet-062072" does not exist

>>> host: /etc/cni:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: ip a s:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: ip r s:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: iptables-save:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: iptables table nat:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-062072" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-062072" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-062072" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: kubelet daemon config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> k8s: kubelet logs:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-062072

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: cri-dockerd version:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: containerd daemon status:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: containerd daemon config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: containerd config dump:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: crio daemon status:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: crio daemon config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: /etc/crio:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

>>> host: crio config:
* Profile "kubenet-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062072"

----------------------- debugLogs end: kubenet-062072 [took: 3.575885775s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-062072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-062072
--- SKIP: TestNetworkPlugins/group/kubenet (3.75s)

TestNetworkPlugins/group/cilium (4.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-062072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-062072

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-062072

>>> host: /etc/nsswitch.conf:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/hosts:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/resolv.conf:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-062072

>>> host: crictl pods:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: crictl containers:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> k8s: describe netcat deployment:
error: context "cilium-062072" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-062072" does not exist

>>> k8s: netcat logs:
error: context "cilium-062072" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-062072" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-062072" does not exist

>>> k8s: coredns logs:
error: context "cilium-062072" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-062072" does not exist

>>> k8s: api server logs:
error: context "cilium-062072" does not exist

>>> host: /etc/cni:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: ip a s:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: ip r s:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: iptables-save:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: iptables table nat:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-062072

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-062072

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-062072" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-062072" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-062072

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-062072

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-062072" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-062072" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-062072" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-062072" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-062072" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: kubelet daemon config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> k8s: kubelet logs:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-062072

>>> host: docker daemon status:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: docker daemon config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: docker system info:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: cri-docker daemon status:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: cri-docker daemon config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: cri-dockerd version:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: containerd daemon status:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: containerd daemon config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: containerd config dump:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: crio daemon status:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: crio daemon config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: /etc/crio:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

>>> host: crio config:
* Profile "cilium-062072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062072"

----------------------- debugLogs end: cilium-062072 [took: 3.983207211s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-062072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-062072
--- SKIP: TestNetworkPlugins/group/cilium (4.15s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-231679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-231679
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)