Test Report: KVM_Linux_containerd 18517

225d0002a402609a65399cabc142d90eb2090f83:2024-03-27:33764

Tests failed (1/333)

Order  Failed test                    Duration
45     TestAddons/parallel/Headlamp   2.76s
TestAddons/parallel/Headlamp (2.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-295637 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-295637 --alsologtostderr -v=1: exit status 11 (331.096303ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 17:34:29.683853   15229 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:34:29.684172   15229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:34:29.684186   15229 out.go:304] Setting ErrFile to fd 2...
	I0327 17:34:29.684193   15229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:34:29.684456   15229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:34:29.684749   15229 mustload.go:65] Loading cluster: addons-295637
	I0327 17:34:29.685244   15229 config.go:182] Loaded profile config "addons-295637": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:34:29.685276   15229 addons.go:597] checking whether the cluster is paused
	I0327 17:34:29.685440   15229 config.go:182] Loaded profile config "addons-295637": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:34:29.685463   15229 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:34:29.685985   15229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:34:29.686042   15229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:34:29.700300   15229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0327 17:34:29.700855   15229 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:34:29.701551   15229 main.go:141] libmachine: Using API Version  1
	I0327 17:34:29.701570   15229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:34:29.702015   15229 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:34:29.702225   15229 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:34:29.703953   15229 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:34:29.704186   15229 ssh_runner.go:195] Run: systemctl --version
	I0327 17:34:29.704213   15229 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:34:29.706926   15229 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:34:29.707314   15229 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:34:29.707342   15229 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:34:29.707515   15229 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:34:29.707681   15229 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:34:29.707824   15229 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:34:29.707954   15229 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:34:29.801889   15229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 17:34:29.802036   15229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 17:34:29.888653   15229 cri.go:89] found id: "a15d5c012834ae40f327408c9647eccb8c99945f15badc1cf300a16061192451"
	I0327 17:34:29.888673   15229 cri.go:89] found id: "9563a19afd34cfca71c445ae88be8642a6d0a6ff78a49df0ee2ba9b416ccd104"
	I0327 17:34:29.888678   15229 cri.go:89] found id: "5efc18c471a715f559018a5ec520385fb1da779e864da87b18dd52e696df702c"
	I0327 17:34:29.888682   15229 cri.go:89] found id: "8c88f8819d1778ad76ece1d8951a48dd457fd29703bd4b74d3f2de59b3cb900e"
	I0327 17:34:29.888686   15229 cri.go:89] found id: "6de22672527185985dd402fd19e327ca24904ec5ea019ba69a413f727516eb35"
	I0327 17:34:29.888692   15229 cri.go:89] found id: "77929332c14ed160dcd7e40920edbee6f0d88544787918b8b2f2db971292d61c"
	I0327 17:34:29.888696   15229 cri.go:89] found id: "7c26ff2e6cde10e4a852e16330328701f6b00a2106f84591b215021fd24f20a1"
	I0327 17:34:29.888699   15229 cri.go:89] found id: "8879c693bd694be3e8d470f5b73758d96551ed255977e386e6e5fd22adae8f90"
	I0327 17:34:29.888703   15229 cri.go:89] found id: "42202fb96c899fc0d3c68c9f69e30ab93af1700282619aa3a6b2f32bc592d35c"
	I0327 17:34:29.888709   15229 cri.go:89] found id: "9d2cfc8cfbe6a9b7d92304372f2e1de120655fd6c5bb0af7a5947e2e3991a042"
	I0327 17:34:29.888721   15229 cri.go:89] found id: "550d1859d3f60432c1d9ed0c885c5e79260c376d42565c7dfbd6da9d87fe1bb5"
	I0327 17:34:29.888728   15229 cri.go:89] found id: "58f1256461d112e1779624f0913482d6981c46b1f9fe0f7fd23af9723b31d3d5"
	I0327 17:34:29.888733   15229 cri.go:89] found id: "768d306fb0c5939bfbcb9fd8db83dee095ca15bdb63c60e4ef1cc15e4aa037bc"
	I0327 17:34:29.888738   15229 cri.go:89] found id: "676caeeb07ae3e2f34670d381e878adfc551b6a0a124ed926cd05920c9f2584f"
	I0327 17:34:29.888749   15229 cri.go:89] found id: "fcaa7908b9040018e28440acb35af04868aa8ec786cbbfaa3e7e8741ce4d357b"
	I0327 17:34:29.888756   15229 cri.go:89] found id: "5435db6245823987deb678c16d997a8710cd12369963ce081d9533c181bf0f42"
	I0327 17:34:29.888761   15229 cri.go:89] found id: "ddd752818f4ab86096650eddd88354ea38dde3b854d04fe1ec666113cf9ee9b4"
	I0327 17:34:29.888771   15229 cri.go:89] found id: "930bbef1ecb135c98d625a6e25a520cd28af5d1e3d3c5700bf3aa70c6a96cea9"
	I0327 17:34:29.888777   15229 cri.go:89] found id: "c34798f011fbad0704982351bb2a022a7e81581cc80dff411f456121080b5409"
	I0327 17:34:29.888782   15229 cri.go:89] found id: ""
	I0327 17:34:29.888909   15229 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0327 17:34:29.949355   15229 main.go:141] libmachine: Making call to close driver server
	I0327 17:34:29.949374   15229 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:34:29.949728   15229 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:34:29.949730   15229 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:34:29.949771   15229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:34:29.952137   15229 out.go:177] 
	W0327 17:34:29.953500   15229 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T17:34:29Z" level=error msg="stat /run/containerd/runc/k8s.io/b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T17:34:29Z" level=error msg="stat /run/containerd/runc/k8s.io/b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe: no such file or directory"
	
	W0327 17:34:29.953522   15229 out.go:239] * 
	* 
	W0327 17:34:29.956001   15229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 17:34:29.957373   15229 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-295637 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-295637 -n addons-295637
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-295637 logs -n 25: (1.563019986s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-363016                                                                     | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| start   | -o=json --download-only                                                                     | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC |                     |
	|         | -p download-only-268880                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| delete  | -p download-only-268880                                                                     | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| start   | -o=json --download-only                                                                     | download-only-774033 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC |                     |
	|         | -p download-only-774033                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                                         |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| delete  | -p download-only-774033                                                                     | download-only-774033 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| delete  | -p download-only-363016                                                                     | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| delete  | -p download-only-268880                                                                     | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| delete  | -p download-only-774033                                                                     | download-only-774033 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-504745 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC |                     |
	|         | binary-mirror-504745                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:32787                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-504745                                                                     | binary-mirror-504745 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:31 UTC |
	| addons  | enable dashboard -p                                                                         | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC |                     |
	|         | addons-295637                                                                               |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC |                     |
	|         | addons-295637                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-295637 --wait=true                                                                | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:31 UTC | 27 Mar 24 17:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-295637 addons                                                                        | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:33 UTC | 27 Mar 24 17:34 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-295637 ssh cat                                                                       | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | /opt/local-path-provisioner/pvc-b157adfc-a620-496f-a31a-3bfb029d1256_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-295637 addons disable                                                                | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-295637 ip                                                                            | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	| addons  | addons-295637 addons disable                                                                | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | -p addons-295637                                                                            |                      |         |                |                     |                     |
	| addons  | addons-295637 addons disable                                                                | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | addons-295637                                                                               |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC | 27 Mar 24 17:34 UTC |
	|         | addons-295637                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-295637        | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:34 UTC |                     |
	|         | -p addons-295637                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 17:31:28
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 17:31:28.871041   13562 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:31:28.871294   13562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:31:28.871304   13562 out.go:304] Setting ErrFile to fd 2...
	I0327 17:31:28.871308   13562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:31:28.871486   13562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:31:28.872034   13562 out.go:298] Setting JSON to false
	I0327 17:31:28.872914   13562 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":823,"bootTime":1711559866,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:31:28.872970   13562 start.go:139] virtualization: kvm guest
	I0327 17:31:28.875240   13562 out.go:177] * [addons-295637] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 17:31:28.876788   13562 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 17:31:28.878233   13562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:31:28.876801   13562 notify.go:220] Checking for updates...
	I0327 17:31:28.880694   13562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:31:28.882021   13562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:31:28.883412   13562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 17:31:28.884665   13562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 17:31:28.885999   13562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:31:28.916708   13562 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 17:31:28.918133   13562 start.go:297] selected driver: kvm2
	I0327 17:31:28.918145   13562 start.go:901] validating driver "kvm2" against <nil>
	I0327 17:31:28.918156   13562 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 17:31:28.918824   13562 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:31:28.918882   13562 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18517-5351/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 17:31:28.933670   13562 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 17:31:28.933727   13562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 17:31:28.933927   13562 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 17:31:28.933985   13562 cni.go:84] Creating CNI manager for ""
	I0327 17:31:28.933997   13562 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:31:28.934003   13562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 17:31:28.934058   13562 start.go:340] cluster config:
	{Name:addons-295637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:31:28.934151   13562 iso.go:125] acquiring lock: {Name:mk44c6a96477688dc44b4b6d05c12d77dcc41cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:31:28.936187   13562 out.go:177] * Starting "addons-295637" primary control-plane node in "addons-295637" cluster
	I0327 17:31:28.937317   13562 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 17:31:28.937357   13562 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0327 17:31:28.937368   13562 cache.go:56] Caching tarball of preloaded images
	I0327 17:31:28.937464   13562 preload.go:173] Found /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0327 17:31:28.937478   13562 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0327 17:31:28.937768   13562 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/config.json ...
	I0327 17:31:28.937788   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/config.json: {Name:mkbf35291ce5a7cb20fd859685bdbf669ee4ad5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:31:28.937920   13562 start.go:360] acquireMachinesLock for addons-295637: {Name:mka30cf451fa6b46789de0283079584e44a83c82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 17:31:28.937963   13562 start.go:364] duration metric: took 29.907µs to acquireMachinesLock for "addons-295637"
	I0327 17:31:28.937979   13562 start.go:93] Provisioning new machine with config: &{Name:addons-295637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 17:31:28.938039   13562 start.go:125] createHost starting for "" (driver="kvm2")
	I0327 17:31:28.939493   13562 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 17:31:28.939643   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:31:28.939681   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:31:28.953760   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0327 17:31:28.954189   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:31:28.954697   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:31:28.954717   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:31:28.955058   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:31:28.955254   13562 main.go:141] libmachine: (addons-295637) Calling .GetMachineName
	I0327 17:31:28.955396   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:28.955525   13562 start.go:159] libmachine.API.Create for "addons-295637" (driver="kvm2")
	I0327 17:31:28.955579   13562 client.go:168] LocalClient.Create starting
	I0327 17:31:28.955619   13562 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem
	I0327 17:31:29.195426   13562 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/cert.pem
	I0327 17:31:29.423594   13562 main.go:141] libmachine: Running pre-create checks...
	I0327 17:31:29.423614   13562 main.go:141] libmachine: (addons-295637) Calling .PreCreateCheck
	I0327 17:31:29.424085   13562 main.go:141] libmachine: (addons-295637) Calling .GetConfigRaw
	I0327 17:31:29.424530   13562 main.go:141] libmachine: Creating machine...
	I0327 17:31:29.424545   13562 main.go:141] libmachine: (addons-295637) Calling .Create
	I0327 17:31:29.424698   13562 main.go:141] libmachine: (addons-295637) Creating KVM machine...
	I0327 17:31:29.426000   13562 main.go:141] libmachine: (addons-295637) DBG | found existing default KVM network
	I0327 17:31:29.426875   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:29.426725   13584 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0327 17:31:29.426915   13562 main.go:141] libmachine: (addons-295637) DBG | created network xml: 
	I0327 17:31:29.426923   13562 main.go:141] libmachine: (addons-295637) DBG | <network>
	I0327 17:31:29.426932   13562 main.go:141] libmachine: (addons-295637) DBG |   <name>mk-addons-295637</name>
	I0327 17:31:29.426937   13562 main.go:141] libmachine: (addons-295637) DBG |   <dns enable='no'/>
	I0327 17:31:29.426944   13562 main.go:141] libmachine: (addons-295637) DBG |   
	I0327 17:31:29.426953   13562 main.go:141] libmachine: (addons-295637) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0327 17:31:29.426968   13562 main.go:141] libmachine: (addons-295637) DBG |     <dhcp>
	I0327 17:31:29.426973   13562 main.go:141] libmachine: (addons-295637) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0327 17:31:29.426978   13562 main.go:141] libmachine: (addons-295637) DBG |     </dhcp>
	I0327 17:31:29.426985   13562 main.go:141] libmachine: (addons-295637) DBG |   </ip>
	I0327 17:31:29.426990   13562 main.go:141] libmachine: (addons-295637) DBG |   
	I0327 17:31:29.426997   13562 main.go:141] libmachine: (addons-295637) DBG | </network>
	I0327 17:31:29.427002   13562 main.go:141] libmachine: (addons-295637) DBG | 
	I0327 17:31:29.432328   13562 main.go:141] libmachine: (addons-295637) DBG | trying to create private KVM network mk-addons-295637 192.168.39.0/24...
	I0327 17:31:29.491733   13562 main.go:141] libmachine: (addons-295637) DBG | private KVM network mk-addons-295637 192.168.39.0/24 created
	I0327 17:31:29.491777   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:29.491676   13584 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:31:29.491789   13562 main.go:141] libmachine: (addons-295637) Setting up store path in /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637 ...
	I0327 17:31:29.491812   13562 main.go:141] libmachine: (addons-295637) Building disk image from file:///home/jenkins/minikube-integration/18517-5351/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso
	I0327 17:31:29.491834   13562 main.go:141] libmachine: (addons-295637) Downloading /home/jenkins/minikube-integration/18517-5351/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18517-5351/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso...
	I0327 17:31:29.731341   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:29.731216   13584 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa...
	I0327 17:31:29.974865   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:29.974710   13584 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/addons-295637.rawdisk...
	I0327 17:31:29.974902   13562 main.go:141] libmachine: (addons-295637) DBG | Writing magic tar header
	I0327 17:31:29.974916   13562 main.go:141] libmachine: (addons-295637) DBG | Writing SSH key tar header
	I0327 17:31:29.974930   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:29.974825   13584 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637 ...
	I0327 17:31:29.974946   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637
	I0327 17:31:29.974958   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18517-5351/.minikube/machines
	I0327 17:31:29.974969   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:31:29.974977   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637 (perms=drwx------)
	I0327 17:31:29.974983   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18517-5351
	I0327 17:31:29.974989   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 17:31:29.974995   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home/jenkins
	I0327 17:31:29.975003   13562 main.go:141] libmachine: (addons-295637) DBG | Checking permissions on dir: /home
	I0327 17:31:29.975010   13562 main.go:141] libmachine: (addons-295637) DBG | Skipping /home - not owner
	I0327 17:31:29.975033   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins/minikube-integration/18517-5351/.minikube/machines (perms=drwxr-xr-x)
	I0327 17:31:29.975047   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins/minikube-integration/18517-5351/.minikube (perms=drwxr-xr-x)
	I0327 17:31:29.975079   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins/minikube-integration/18517-5351 (perms=drwxrwxr-x)
	I0327 17:31:29.975112   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 17:31:29.975123   13562 main.go:141] libmachine: (addons-295637) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 17:31:29.975131   13562 main.go:141] libmachine: (addons-295637) Creating domain...
	I0327 17:31:29.976090   13562 main.go:141] libmachine: (addons-295637) define libvirt domain using xml: 
	I0327 17:31:29.976139   13562 main.go:141] libmachine: (addons-295637) <domain type='kvm'>
	I0327 17:31:29.976157   13562 main.go:141] libmachine: (addons-295637)   <name>addons-295637</name>
	I0327 17:31:29.976167   13562 main.go:141] libmachine: (addons-295637)   <memory unit='MiB'>4000</memory>
	I0327 17:31:29.976187   13562 main.go:141] libmachine: (addons-295637)   <vcpu>2</vcpu>
	I0327 17:31:29.976201   13562 main.go:141] libmachine: (addons-295637)   <features>
	I0327 17:31:29.976207   13562 main.go:141] libmachine: (addons-295637)     <acpi/>
	I0327 17:31:29.976211   13562 main.go:141] libmachine: (addons-295637)     <apic/>
	I0327 17:31:29.976216   13562 main.go:141] libmachine: (addons-295637)     <pae/>
	I0327 17:31:29.976223   13562 main.go:141] libmachine: (addons-295637)     
	I0327 17:31:29.976229   13562 main.go:141] libmachine: (addons-295637)   </features>
	I0327 17:31:29.976237   13562 main.go:141] libmachine: (addons-295637)   <cpu mode='host-passthrough'>
	I0327 17:31:29.976252   13562 main.go:141] libmachine: (addons-295637)   
	I0327 17:31:29.976269   13562 main.go:141] libmachine: (addons-295637)   </cpu>
	I0327 17:31:29.976280   13562 main.go:141] libmachine: (addons-295637)   <os>
	I0327 17:31:29.976297   13562 main.go:141] libmachine: (addons-295637)     <type>hvm</type>
	I0327 17:31:29.976313   13562 main.go:141] libmachine: (addons-295637)     <boot dev='cdrom'/>
	I0327 17:31:29.976324   13562 main.go:141] libmachine: (addons-295637)     <boot dev='hd'/>
	I0327 17:31:29.976328   13562 main.go:141] libmachine: (addons-295637)     <bootmenu enable='no'/>
	I0327 17:31:29.976335   13562 main.go:141] libmachine: (addons-295637)   </os>
	I0327 17:31:29.976345   13562 main.go:141] libmachine: (addons-295637)   <devices>
	I0327 17:31:29.976353   13562 main.go:141] libmachine: (addons-295637)     <disk type='file' device='cdrom'>
	I0327 17:31:29.976361   13562 main.go:141] libmachine: (addons-295637)       <source file='/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/boot2docker.iso'/>
	I0327 17:31:29.976370   13562 main.go:141] libmachine: (addons-295637)       <target dev='hdc' bus='scsi'/>
	I0327 17:31:29.976374   13562 main.go:141] libmachine: (addons-295637)       <readonly/>
	I0327 17:31:29.976396   13562 main.go:141] libmachine: (addons-295637)     </disk>
	I0327 17:31:29.976417   13562 main.go:141] libmachine: (addons-295637)     <disk type='file' device='disk'>
	I0327 17:31:29.976435   13562 main.go:141] libmachine: (addons-295637)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 17:31:29.976452   13562 main.go:141] libmachine: (addons-295637)       <source file='/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/addons-295637.rawdisk'/>
	I0327 17:31:29.976468   13562 main.go:141] libmachine: (addons-295637)       <target dev='hda' bus='virtio'/>
	I0327 17:31:29.976481   13562 main.go:141] libmachine: (addons-295637)     </disk>
	I0327 17:31:29.976495   13562 main.go:141] libmachine: (addons-295637)     <interface type='network'>
	I0327 17:31:29.976510   13562 main.go:141] libmachine: (addons-295637)       <source network='mk-addons-295637'/>
	I0327 17:31:29.976520   13562 main.go:141] libmachine: (addons-295637)       <model type='virtio'/>
	I0327 17:31:29.976547   13562 main.go:141] libmachine: (addons-295637)     </interface>
	I0327 17:31:29.976561   13562 main.go:141] libmachine: (addons-295637)     <interface type='network'>
	I0327 17:31:29.976576   13562 main.go:141] libmachine: (addons-295637)       <source network='default'/>
	I0327 17:31:29.976595   13562 main.go:141] libmachine: (addons-295637)       <model type='virtio'/>
	I0327 17:31:29.976608   13562 main.go:141] libmachine: (addons-295637)     </interface>
	I0327 17:31:29.976617   13562 main.go:141] libmachine: (addons-295637)     <serial type='pty'>
	I0327 17:31:29.976631   13562 main.go:141] libmachine: (addons-295637)       <target port='0'/>
	I0327 17:31:29.976643   13562 main.go:141] libmachine: (addons-295637)     </serial>
	I0327 17:31:29.976657   13562 main.go:141] libmachine: (addons-295637)     <console type='pty'>
	I0327 17:31:29.976675   13562 main.go:141] libmachine: (addons-295637)       <target type='serial' port='0'/>
	I0327 17:31:29.976689   13562 main.go:141] libmachine: (addons-295637)     </console>
	I0327 17:31:29.976701   13562 main.go:141] libmachine: (addons-295637)     <rng model='virtio'>
	I0327 17:31:29.976713   13562 main.go:141] libmachine: (addons-295637)       <backend model='random'>/dev/random</backend>
	I0327 17:31:29.976726   13562 main.go:141] libmachine: (addons-295637)     </rng>
	I0327 17:31:29.976737   13562 main.go:141] libmachine: (addons-295637)     
	I0327 17:31:29.976754   13562 main.go:141] libmachine: (addons-295637)     
	I0327 17:31:29.976768   13562 main.go:141] libmachine: (addons-295637)   </devices>
	I0327 17:31:29.976778   13562 main.go:141] libmachine: (addons-295637) </domain>
	I0327 17:31:29.976790   13562 main.go:141] libmachine: (addons-295637) 
	I0327 17:31:29.982524   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:d6:18:6e in network default
	I0327 17:31:29.983065   13562 main.go:141] libmachine: (addons-295637) Ensuring networks are active...
	I0327 17:31:29.983082   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:29.983645   13562 main.go:141] libmachine: (addons-295637) Ensuring network default is active
	I0327 17:31:29.983908   13562 main.go:141] libmachine: (addons-295637) Ensuring network mk-addons-295637 is active
	I0327 17:31:29.984404   13562 main.go:141] libmachine: (addons-295637) Getting domain xml...
	I0327 17:31:29.984953   13562 main.go:141] libmachine: (addons-295637) Creating domain...
	I0327 17:31:31.321012   13562 main.go:141] libmachine: (addons-295637) Waiting to get IP...
	I0327 17:31:31.321745   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:31.322104   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:31.322134   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:31.322085   13584 retry.go:31] will retry after 195.074083ms: waiting for machine to come up
	I0327 17:31:31.518466   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:31.518818   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:31.518844   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:31.518783   13584 retry.go:31] will retry after 292.603296ms: waiting for machine to come up
	I0327 17:31:31.813300   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:31.813744   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:31.813773   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:31.813721   13584 retry.go:31] will retry after 426.847434ms: waiting for machine to come up
	I0327 17:31:32.242313   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:32.242645   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:32.242664   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:32.242598   13584 retry.go:31] will retry after 457.012299ms: waiting for machine to come up
	I0327 17:31:32.701330   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:32.701794   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:32.701835   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:32.701766   13584 retry.go:31] will retry after 465.131279ms: waiting for machine to come up
	I0327 17:31:33.168279   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:33.168709   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:33.168739   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:33.168655   13584 retry.go:31] will retry after 572.301994ms: waiting for machine to come up
	I0327 17:31:33.742421   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:33.742889   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:33.742919   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:33.742839   13584 retry.go:31] will retry after 1.136042636s: waiting for machine to come up
	I0327 17:31:34.880677   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:34.881084   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:34.881130   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:34.881063   13584 retry.go:31] will retry after 1.479588575s: waiting for machine to come up
	I0327 17:31:36.362667   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:36.363164   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:36.363188   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:36.363104   13584 retry.go:31] will retry after 1.408626006s: waiting for machine to come up
	I0327 17:31:37.772963   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:37.773467   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:37.773495   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:37.773416   13584 retry.go:31] will retry after 2.100268459s: waiting for machine to come up
	I0327 17:31:39.875393   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:39.875888   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:39.875910   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:39.875829   13584 retry.go:31] will retry after 2.896693714s: waiting for machine to come up
	I0327 17:31:42.774415   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:42.774790   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:42.774818   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:42.774736   13584 retry.go:31] will retry after 3.456698901s: waiting for machine to come up
	I0327 17:31:46.234632   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:46.235033   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:46.235062   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:46.234951   13584 retry.go:31] will retry after 2.807793579s: waiting for machine to come up
	I0327 17:31:49.044487   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:49.044837   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find current IP address of domain addons-295637 in network mk-addons-295637
	I0327 17:31:49.044862   13562 main.go:141] libmachine: (addons-295637) DBG | I0327 17:31:49.044791   13584 retry.go:31] will retry after 4.113515999s: waiting for machine to come up
	I0327 17:31:53.163199   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.163708   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has current primary IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.163737   13562 main.go:141] libmachine: (addons-295637) Found IP for machine: 192.168.39.182
	I0327 17:31:53.163750   13562 main.go:141] libmachine: (addons-295637) Reserving static IP address...
	I0327 17:31:53.164079   13562 main.go:141] libmachine: (addons-295637) DBG | unable to find host DHCP lease matching {name: "addons-295637", mac: "52:54:00:92:a0:84", ip: "192.168.39.182"} in network mk-addons-295637
	I0327 17:31:53.231151   13562 main.go:141] libmachine: (addons-295637) DBG | Getting to WaitForSSH function...
	I0327 17:31:53.231182   13562 main.go:141] libmachine: (addons-295637) Reserved static IP address: 192.168.39.182
	I0327 17:31:53.231195   13562 main.go:141] libmachine: (addons-295637) Waiting for SSH to be available...
	I0327 17:31:53.234009   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.234431   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.234475   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.234690   13562 main.go:141] libmachine: (addons-295637) DBG | Using SSH client type: external
	I0327 17:31:53.234736   13562 main.go:141] libmachine: (addons-295637) DBG | Using SSH private key: /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa (-rw-------)
	I0327 17:31:53.234767   13562 main.go:141] libmachine: (addons-295637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 17:31:53.234776   13562 main.go:141] libmachine: (addons-295637) DBG | About to run SSH command:
	I0327 17:31:53.234786   13562 main.go:141] libmachine: (addons-295637) DBG | exit 0
	I0327 17:31:53.365480   13562 main.go:141] libmachine: (addons-295637) DBG | SSH cmd err, output: <nil>: 
	I0327 17:31:53.365706   13562 main.go:141] libmachine: (addons-295637) KVM machine creation complete!
	I0327 17:31:53.365997   13562 main.go:141] libmachine: (addons-295637) Calling .GetConfigRaw
	I0327 17:31:53.366506   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:53.366720   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:53.366879   13562 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 17:31:53.366893   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:31:53.368377   13562 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 17:31:53.368389   13562 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 17:31:53.368395   13562 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 17:31:53.368401   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:53.370421   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.370768   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.370804   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.370902   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:53.371069   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.371232   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.371370   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:53.371545   13562 main.go:141] libmachine: Using SSH client type: native
	I0327 17:31:53.371728   13562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0327 17:31:53.371741   13562 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 17:31:53.481110   13562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 17:31:53.481141   13562 main.go:141] libmachine: Detecting the provisioner...
	I0327 17:31:53.481152   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:53.484045   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.484428   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.484455   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.484568   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:53.484741   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.484860   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.484962   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:53.485214   13562 main.go:141] libmachine: Using SSH client type: native
	I0327 17:31:53.485417   13562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0327 17:31:53.485444   13562 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 17:31:53.594507   13562 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 17:31:53.594577   13562 main.go:141] libmachine: found compatible host: buildroot
	I0327 17:31:53.594590   13562 main.go:141] libmachine: Provisioning with buildroot...
	I0327 17:31:53.594598   13562 main.go:141] libmachine: (addons-295637) Calling .GetMachineName
	I0327 17:31:53.594806   13562 buildroot.go:166] provisioning hostname "addons-295637"
	I0327 17:31:53.594841   13562 main.go:141] libmachine: (addons-295637) Calling .GetMachineName
	I0327 17:31:53.595023   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:53.597482   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.597913   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.597934   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.598073   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:53.598279   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.598458   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.598607   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:53.598773   13562 main.go:141] libmachine: Using SSH client type: native
	I0327 17:31:53.599019   13562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0327 17:31:53.599038   13562 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-295637 && echo "addons-295637" | sudo tee /etc/hostname
	I0327 17:31:53.721141   13562 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-295637
	
	I0327 17:31:53.721166   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:53.723837   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.724226   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.724255   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.724438   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:53.724601   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.724735   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:53.724885   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:53.725055   13562 main.go:141] libmachine: Using SSH client type: native
	I0327 17:31:53.725208   13562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0327 17:31:53.725224   13562 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-295637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-295637/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-295637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 17:31:53.838874   13562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
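The `/etc/hosts` rewrite shown in the SSH command above is self-contained and can be sanity-checked outside the VM. This is a sketch only: it applies the same logic to a scratch copy of `/etc/hosts` (with a hypothetical stale `old-name` entry) so it runs without root, rather than the file minikube actually edits over SSH.

```shell
# Sketch of minikube's hostname rewrite, run against a scratch file
# instead of the real /etc/hosts so no sudo is needed.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=addons-295637

# Same structure as the logged command: only touch the file if the
# hostname is not already present.
if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
        # Replace an existing 127.0.1.1 entry with the new hostname
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
    else
        # No 127.0.1.1 entry yet: append one
        echo "127.0.1.1 ${name}" >> "$hosts"
    fi
fi
grep "$name" "$hosts"
rm -f "$hosts"
```

Because the log shows an empty SSH output (`SSH cmd err, output: <nil>:`), the sed branch was taken on the VM: the image's existing `127.0.1.1` entry was replaced in place rather than appended.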
	I0327 17:31:53.838907   13562 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18517-5351/.minikube CaCertPath:/home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18517-5351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18517-5351/.minikube}
	I0327 17:31:53.838949   13562 buildroot.go:174] setting up certificates
	I0327 17:31:53.838973   13562 provision.go:84] configureAuth start
	I0327 17:31:53.838992   13562 main.go:141] libmachine: (addons-295637) Calling .GetMachineName
	I0327 17:31:53.839269   13562 main.go:141] libmachine: (addons-295637) Calling .GetIP
	I0327 17:31:53.841847   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.842170   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.842190   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.842355   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:53.844506   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.844874   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:53.844900   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:53.844998   13562 provision.go:143] copyHostCerts
	I0327 17:31:53.845071   13562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18517-5351/.minikube/ca.pem (1082 bytes)
	I0327 17:31:53.845184   13562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18517-5351/.minikube/cert.pem (1123 bytes)
	I0327 17:31:53.845254   13562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18517-5351/.minikube/key.pem (1675 bytes)
	I0327 17:31:53.845320   13562 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18517-5351/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca-key.pem org=jenkins.addons-295637 san=[127.0.0.1 192.168.39.182 addons-295637 localhost minikube]
	I0327 17:31:54.062886   13562 provision.go:177] copyRemoteCerts
	I0327 17:31:54.062933   13562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 17:31:54.062959   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:54.065479   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.065751   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.065781   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.065911   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:54.066102   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.066290   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:54.066388   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:31:54.152895   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 17:31:54.179706   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 17:31:54.205711   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 17:31:54.231717   13562 provision.go:87] duration metric: took 392.729593ms to configureAuth
	I0327 17:31:54.231745   13562 buildroot.go:189] setting minikube options for container-runtime
	I0327 17:31:54.231899   13562 config.go:182] Loaded profile config "addons-295637": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:31:54.231922   13562 main.go:141] libmachine: Checking connection to Docker...
	I0327 17:31:54.231939   13562 main.go:141] libmachine: (addons-295637) Calling .GetURL
	I0327 17:31:54.232988   13562 main.go:141] libmachine: (addons-295637) DBG | Using libvirt version 6000000
	I0327 17:31:54.234874   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.235236   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.235276   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.235471   13562 main.go:141] libmachine: Docker is up and running!
	I0327 17:31:54.235484   13562 main.go:141] libmachine: Reticulating splines...
	I0327 17:31:54.235491   13562 client.go:171] duration metric: took 25.279902148s to LocalClient.Create
	I0327 17:31:54.235513   13562 start.go:167] duration metric: took 25.279991669s to libmachine.API.Create "addons-295637"
	I0327 17:31:54.235522   13562 start.go:293] postStartSetup for "addons-295637" (driver="kvm2")
	I0327 17:31:54.235534   13562 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 17:31:54.235549   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:54.235787   13562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 17:31:54.235819   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:54.238721   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.239150   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.239178   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.239289   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:54.239460   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.239609   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:54.239725   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:31:54.324298   13562 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 17:31:54.329310   13562 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 17:31:54.329332   13562 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-5351/.minikube/addons for local assets ...
	I0327 17:31:54.329403   13562 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-5351/.minikube/files for local assets ...
	I0327 17:31:54.329461   13562 start.go:296] duration metric: took 93.930103ms for postStartSetup
	I0327 17:31:54.329493   13562 main.go:141] libmachine: (addons-295637) Calling .GetConfigRaw
	I0327 17:31:54.330111   13562 main.go:141] libmachine: (addons-295637) Calling .GetIP
	I0327 17:31:54.332353   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.332629   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.332656   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.332862   13562 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/config.json ...
	I0327 17:31:54.333012   13562 start.go:128] duration metric: took 25.394965156s to createHost
	I0327 17:31:54.333032   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:54.335204   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.335521   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.335543   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.335684   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:54.335868   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.336008   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.336123   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:54.336269   13562 main.go:141] libmachine: Using SSH client type: native
	I0327 17:31:54.336424   13562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0327 17:31:54.336435   13562 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 17:31:54.442211   13562 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711560714.412749719
	
	I0327 17:31:54.442230   13562 fix.go:216] guest clock: 1711560714.412749719
	I0327 17:31:54.442239   13562 fix.go:229] Guest: 2024-03-27 17:31:54.412749719 +0000 UTC Remote: 2024-03-27 17:31:54.333023674 +0000 UTC m=+25.506958220 (delta=79.726045ms)
	I0327 17:31:54.442257   13562 fix.go:200] guest clock delta is within tolerance: 79.726045ms
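The `date +%!s(MISSING).%!N(MISSING)` line above is a Go logging artifact (literal `%s`/`%N` in the command string are eaten by the logger's format verbs); the command actually sent over SSH is almost certainly `date +%s.%N`, whose `seconds.nanoseconds` output feeds the guest-clock delta check. A minimal local sketch of that skew computation, using `awk` in place of whatever arithmetic minikube does internally:

```shell
# Take two timestamps the way the guest-clock check does, then compute
# the skew in seconds. Both samples are local here, so the delta is tiny.
guest=$(date +%s.%N)
host=$(date +%s.%N)
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN{printf "%.6f", g-h}')
echo "delta=${delta}s"
```

In the run above the measured delta was 79.7ms, inside minikube's tolerance, so no clock adjustment was attempted.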
	I0327 17:31:54.442262   13562 start.go:83] releasing machines lock for "addons-295637", held for 25.504290104s
	I0327 17:31:54.442290   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:54.442523   13562 main.go:141] libmachine: (addons-295637) Calling .GetIP
	I0327 17:31:54.444876   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.445176   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.445203   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.445300   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:54.445891   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:54.446054   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:31:54.446139   13562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 17:31:54.446186   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:54.446218   13562 ssh_runner.go:195] Run: cat /version.json
	I0327 17:31:54.446237   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:31:54.448593   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.448714   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.448998   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.449033   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.449062   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:54.449078   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:54.449138   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:54.449333   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.449347   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:31:54.449467   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:31:54.449528   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:54.449583   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:31:54.449653   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:31:54.449970   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:31:54.526505   13562 ssh_runner.go:195] Run: systemctl --version
	I0327 17:31:54.554169   13562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 17:31:54.560331   13562 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 17:31:54.560400   13562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 17:31:54.576863   13562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 17:31:54.576878   13562 start.go:494] detecting cgroup driver to use...
	I0327 17:31:54.576936   13562 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 17:31:54.611572   13562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 17:31:54.627154   13562 docker.go:217] disabling cri-docker service (if available) ...
	I0327 17:31:54.627195   13562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 17:31:54.643125   13562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 17:31:54.657979   13562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 17:31:54.772300   13562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 17:31:54.912528   13562 docker.go:233] disabling docker service ...
	I0327 17:31:54.912597   13562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 17:31:54.928283   13562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 17:31:54.942391   13562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 17:31:55.090863   13562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 17:31:55.213914   13562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 17:31:55.229656   13562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 17:31:55.250785   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 17:31:55.261318   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 17:31:55.271799   13562 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 17:31:55.271851   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 17:31:55.282281   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 17:31:55.292979   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 17:31:55.303710   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 17:31:55.314264   13562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 17:31:55.325211   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 17:31:55.336045   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 17:31:55.347068   13562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
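The run of `sed` commands above rewrites `/etc/containerd/config.toml` for the cgroupfs driver. The key edit is the `SystemdCgroup` toggle; the sketch below replays that exact sed expression against a scratch config.toml (the file contents here are a hypothetical minimal fragment, not the VM's real config):

```shell
# Reproduce the cgroup-driver edit from the log on a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same expression minikube ran: flip SystemdCgroup, preserving indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
rm -f "$cfg"
```

Setting `SystemdCgroup = false` matches the "configuring containerd to use cgroupfs" log line: the Buildroot guest does not run systemd as the cgroup manager, so kubelet and containerd must both use the cgroupfs driver.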
	I0327 17:31:55.358534   13562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 17:31:55.368708   13562 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 17:31:55.368755   13562 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 17:31:55.383419   13562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 17:31:55.393336   13562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 17:31:55.512613   13562 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 17:31:55.542464   13562 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0327 17:31:55.542558   13562 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 17:31:55.547294   13562 retry.go:31] will retry after 760.457401ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0327 17:31:56.308274   13562 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0327 17:31:56.314063   13562 start.go:562] Will wait 60s for crictl version
	I0327 17:31:56.314126   13562 ssh_runner.go:195] Run: which crictl
	I0327 17:31:56.318596   13562 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 17:31:56.356742   13562 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0327 17:31:56.356828   13562 ssh_runner.go:195] Run: containerd --version
	I0327 17:31:56.384901   13562 ssh_runner.go:195] Run: containerd --version
	I0327 17:31:56.418514   13562 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.14 ...
	I0327 17:31:56.420041   13562 main.go:141] libmachine: (addons-295637) Calling .GetIP
	I0327 17:31:56.422620   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:56.422892   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:31:56.422918   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:31:56.423094   13562 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 17:31:56.427640   13562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 17:31:56.440815   13562 kubeadm.go:877] updating cluster {Name:addons-295637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295637 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 17:31:56.440945   13562 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 17:31:56.440996   13562 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 17:31:56.475081   13562 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0327 17:31:56.475141   13562 ssh_runner.go:195] Run: which lz4
	I0327 17:31:56.479199   13562 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 17:31:56.483976   13562 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 17:31:56.484003   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0327 17:31:57.960153   13562 containerd.go:563] duration metric: took 1.48099628s to copy over tarball
	I0327 17:31:57.960222   13562 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 17:32:00.421057   13562 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460803987s)
	I0327 17:32:00.421085   13562 containerd.go:570] duration metric: took 2.460907572s to extract the tarball
	I0327 17:32:00.421094   13562 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 17:32:00.461475   13562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 17:32:00.583231   13562 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 17:32:00.619695   13562 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 17:32:00.655173   13562 retry.go:31] will retry after 200.865782ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-27T17:32:00Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0327 17:32:00.856740   13562 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 17:32:00.897678   13562 containerd.go:627] all images are preloaded for containerd runtime.
	I0327 17:32:00.897700   13562 cache_images.go:84] Images are preloaded, skipping loading
	I0327 17:32:00.897707   13562 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.29.3 containerd true true} ...
	I0327 17:32:00.897817   13562 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-295637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-295637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 17:32:00.897881   13562 ssh_runner.go:195] Run: sudo crictl info
	I0327 17:32:00.933666   13562 cni.go:84] Creating CNI manager for ""
	I0327 17:32:00.933691   13562 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:32:00.933701   13562 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 17:32:00.933721   13562 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-295637 NodeName:addons-295637 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 17:32:00.933830   13562 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-295637"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 17:32:00.933886   13562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 17:32:00.945232   13562 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 17:32:00.945293   13562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 17:32:00.955990   13562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0327 17:32:00.973541   13562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 17:32:00.990779   13562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0327 17:32:01.008748   13562 ssh_runner.go:195] Run: grep 192.168.39.182	control-plane.minikube.internal$ /etc/hosts
	I0327 17:32:01.013676   13562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 17:32:01.027293   13562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 17:32:01.146101   13562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 17:32:01.168415   13562 certs.go:68] Setting up /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637 for IP: 192.168.39.182
	I0327 17:32:01.168434   13562 certs.go:194] generating shared ca certs ...
	I0327 17:32:01.168448   13562 certs.go:226] acquiring lock for ca certs: {Name:mka0b95bccc33d779dc4998dc5e0addbad80bf8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.176203   13562 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18517-5351/.minikube/ca.key
	I0327 17:32:01.251716   13562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-5351/.minikube/ca.crt ...
	I0327 17:32:01.251743   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/ca.crt: {Name:mkc6497b9e30010ff937ae27a5eaa993ffc2615d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.251916   13562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-5351/.minikube/ca.key ...
	I0327 17:32:01.251930   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/ca.key: {Name:mk16c4c9874fbb3440dd8effcfc2928a36316bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.252044   13562 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.key
	I0327 17:32:01.454505   13562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.crt ...
	I0327 17:32:01.454541   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.crt: {Name:mk5a5fac828128c1777ab0d0c1e0150b5c015965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.454724   13562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.key ...
	I0327 17:32:01.454741   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.key: {Name:mkde3bcb1de709596daef40088942e23400c423f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.454841   13562 certs.go:256] generating profile certs ...
	I0327 17:32:01.454913   13562 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.key
	I0327 17:32:01.454935   13562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt with IP's: []
	I0327 17:32:01.556413   13562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt ...
	I0327 17:32:01.556446   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: {Name:mk65989fbb18ee2038e91375e3399aa2233ca91e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.556582   13562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.key ...
	I0327 17:32:01.556592   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.key: {Name:mkf39a3d256dc0b684898e3f1b205ac08c98f670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.556673   13562 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key.92b2ed55
	I0327 17:32:01.556689   13562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt.92b2ed55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182]
	I0327 17:32:01.821773   13562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt.92b2ed55 ...
	I0327 17:32:01.821797   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt.92b2ed55: {Name:mkf99d79a0a6dc3a9d2e15407b5e23fafa1dccde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.821929   13562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key.92b2ed55 ...
	I0327 17:32:01.821945   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key.92b2ed55: {Name:mkd7ac31055c0d1cf93f264dad07ee4651f7ad1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.822012   13562 certs.go:381] copying /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt.92b2ed55 -> /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt
	I0327 17:32:01.822082   13562 certs.go:385] copying /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key.92b2ed55 -> /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key
	I0327 17:32:01.822129   13562 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.key
	I0327 17:32:01.822145   13562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.crt with IP's: []
	I0327 17:32:01.884200   13562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.crt ...
	I0327 17:32:01.884224   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.crt: {Name:mk063af589710fd7acc5e885bc64700f6f451282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.884357   13562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.key ...
	I0327 17:32:01.884367   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.key: {Name:mk79fda9ff6e04ef33a1e916ae4a56896d8bd1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:01.884517   13562 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 17:32:01.884548   13562 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/ca.pem (1082 bytes)
	I0327 17:32:01.884573   13562 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/cert.pem (1123 bytes)
	I0327 17:32:01.884596   13562 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-5351/.minikube/certs/key.pem (1675 bytes)
	I0327 17:32:01.885181   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 17:32:01.925176   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 17:32:01.950981   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 17:32:01.978412   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 17:32:02.005686   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 17:32:02.031398   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 17:32:02.057105   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 17:32:02.082283   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 17:32:02.107133   13562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-5351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 17:32:02.132017   13562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 17:32:02.150624   13562 ssh_runner.go:195] Run: openssl version
	I0327 17:32:02.156871   13562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 17:32:02.169169   13562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 17:32:02.174211   13562 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 17:32 /usr/share/ca-certificates/minikubeCA.pem
	I0327 17:32:02.174265   13562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 17:32:02.180160   13562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 17:32:02.192997   13562 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 17:32:02.197668   13562 certs.go:399] "apiserver-kubelet-client" cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 17:32:02.197720   13562 kubeadm.go:391] StartCluster: {Name:addons-295637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295637 Namespace:defau
lt APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:32:02.197819   13562 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0327 17:32:02.197852   13562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 17:32:02.237954   13562 cri.go:89] found id: ""
	I0327 17:32:02.238024   13562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 17:32:02.250149   13562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 17:32:02.261732   13562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 17:32:02.273373   13562 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 17:32:02.273391   13562 kubeadm.go:156] found existing configuration files:
	
	I0327 17:32:02.273445   13562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 17:32:02.284349   13562 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 17:32:02.284394   13562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 17:32:02.295558   13562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 17:32:02.305710   13562 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 17:32:02.305746   13562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 17:32:02.316827   13562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 17:32:02.327536   13562 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 17:32:02.327582   13562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 17:32:02.338437   13562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 17:32:02.348480   13562 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 17:32:02.348518   13562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 17:32:02.358837   13562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 17:32:02.414371   13562 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 17:32:02.414470   13562 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 17:32:02.541646   13562 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 17:32:02.541797   13562 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 17:32:02.541923   13562 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 17:32:02.762320   13562 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 17:32:02.815771   13562 out.go:204]   - Generating certificates and keys ...
	I0327 17:32:02.815870   13562 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 17:32:02.815952   13562 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 17:32:03.013687   13562 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 17:32:03.283844   13562 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 17:32:03.433470   13562 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 17:32:03.808686   13562 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 17:32:03.953981   13562 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 17:32:03.954160   13562 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-295637 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0327 17:32:04.076177   13562 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 17:32:04.076540   13562 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-295637 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0327 17:32:04.373223   13562 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 17:32:04.548279   13562 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 17:32:04.700894   13562 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 17:32:04.701001   13562 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 17:32:05.092767   13562 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 17:32:05.194882   13562 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 17:32:05.346319   13562 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 17:32:05.455332   13562 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 17:32:05.733883   13562 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 17:32:05.734541   13562 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 17:32:05.736979   13562 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 17:32:05.753847   13562 out.go:204]   - Booting up control plane ...
	I0327 17:32:05.754011   13562 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 17:32:05.754119   13562 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 17:32:05.754207   13562 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 17:32:05.757904   13562 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 17:32:05.758894   13562 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 17:32:05.758980   13562 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 17:32:05.885444   13562 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 17:32:11.882636   13562 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002422 seconds
	I0327 17:32:11.902770   13562 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 17:32:11.930346   13562 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 17:32:12.462627   13562 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 17:32:12.462829   13562 kubeadm.go:309] [mark-control-plane] Marking the node addons-295637 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 17:32:12.976500   13562 kubeadm.go:309] [bootstrap-token] Using token: y0ek10.9pg8ome0wrtay2gn
	I0327 17:32:12.978899   13562 out.go:204]   - Configuring RBAC rules ...
	I0327 17:32:12.979014   13562 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 17:32:12.986175   13562 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 17:32:12.994191   13562 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 17:32:12.997369   13562 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 17:32:13.000748   13562 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 17:32:13.003857   13562 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 17:32:13.017797   13562 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 17:32:13.243829   13562 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 17:32:13.391028   13562 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 17:32:13.392138   13562 kubeadm.go:309] 
	I0327 17:32:13.392204   13562 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 17:32:13.392212   13562 kubeadm.go:309] 
	I0327 17:32:13.392336   13562 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 17:32:13.392359   13562 kubeadm.go:309] 
	I0327 17:32:13.392395   13562 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 17:32:13.392484   13562 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 17:32:13.392549   13562 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 17:32:13.392561   13562 kubeadm.go:309] 
	I0327 17:32:13.392641   13562 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 17:32:13.392658   13562 kubeadm.go:309] 
	I0327 17:32:13.392724   13562 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 17:32:13.392748   13562 kubeadm.go:309] 
	I0327 17:32:13.392830   13562 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 17:32:13.392944   13562 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 17:32:13.393044   13562 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 17:32:13.393055   13562 kubeadm.go:309] 
	I0327 17:32:13.393174   13562 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 17:32:13.393292   13562 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 17:32:13.393304   13562 kubeadm.go:309] 
	I0327 17:32:13.393429   13562 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token y0ek10.9pg8ome0wrtay2gn \
	I0327 17:32:13.393568   13562 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6bbf0d541a9225674917ee4a65fc13c3e77ded34e5646234f27e11f908d3656f \
	I0327 17:32:13.393601   13562 kubeadm.go:309] 	--control-plane 
	I0327 17:32:13.393608   13562 kubeadm.go:309] 
	I0327 17:32:13.393708   13562 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 17:32:13.393720   13562 kubeadm.go:309] 
	I0327 17:32:13.393818   13562 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token y0ek10.9pg8ome0wrtay2gn \
	I0327 17:32:13.393951   13562 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6bbf0d541a9225674917ee4a65fc13c3e77ded34e5646234f27e11f908d3656f 
	I0327 17:32:13.394681   13562 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 17:32:13.395190   13562 cni.go:84] Creating CNI manager for ""
	I0327 17:32:13.395208   13562 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:32:13.397003   13562 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 17:32:13.398456   13562 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 17:32:13.419854   13562 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 17:32:13.464203   13562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 17:32:13.464319   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:13.464345   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-295637 minikube.k8s.io/updated_at=2024_03_27T17_32_13_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=addons-295637 minikube.k8s.io/primary=true
	I0327 17:32:13.536037   13562 ops.go:34] apiserver oom_adj: -16
	I0327 17:32:13.624119   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:14.124515   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:14.625152   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:15.125242   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:15.625041   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:16.124402   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:16.624868   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:17.124566   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:17.624850   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:18.124849   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:18.624763   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:19.124209   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:19.624261   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:20.124940   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:20.625028   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:21.124491   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:21.624722   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:22.124463   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:22.625202   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:23.124538   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:23.624953   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:24.125144   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:24.624972   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:25.125056   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:25.625230   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:26.124481   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:26.624467   13562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 17:32:26.732836   13562 kubeadm.go:1107] duration metric: took 13.26857461s to wait for elevateKubeSystemPrivileges
	W0327 17:32:26.732871   13562 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 17:32:26.732881   13562 kubeadm.go:393] duration metric: took 24.535164605s to StartCluster
	I0327 17:32:26.732899   13562 settings.go:142] acquiring lock: {Name:mkbd19524b77748351e0114c126f2c4500ddd94c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:26.733009   13562 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:32:26.733381   13562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/kubeconfig: {Name:mked1759971a51875d27f9aea742c21997e1c5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:32:26.733606   13562 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0327 17:32:26.735356   13562 out.go:177] * Verifying Kubernetes components...
	I0327 17:32:26.733612   13562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 17:32:26.733653   13562 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 17:32:26.733826   13562 config.go:182] Loaded profile config "addons-295637": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:32:26.736770   13562 addons.go:69] Setting yakd=true in profile "addons-295637"
	I0327 17:32:26.736784   13562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 17:32:26.736791   13562 addons.go:69] Setting default-storageclass=true in profile "addons-295637"
	I0327 17:32:26.736803   13562 addons.go:234] Setting addon yakd=true in "addons-295637"
	I0327 17:32:26.736810   13562 addons.go:69] Setting ingress-dns=true in profile "addons-295637"
	I0327 17:32:26.736831   13562 addons.go:69] Setting gcp-auth=true in profile "addons-295637"
	I0327 17:32:26.736836   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.736843   13562 addons.go:69] Setting cloud-spanner=true in profile "addons-295637"
	I0327 17:32:26.736859   13562 addons.go:69] Setting ingress=true in profile "addons-295637"
	I0327 17:32:26.736859   13562 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-295637"
	I0327 17:32:26.736869   13562 addons.go:69] Setting inspektor-gadget=true in profile "addons-295637"
	I0327 17:32:26.736848   13562 addons.go:234] Setting addon ingress-dns=true in "addons-295637"
	I0327 17:32:26.736881   13562 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-295637"
	I0327 17:32:26.736886   13562 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-295637"
	I0327 17:32:26.736896   13562 addons.go:234] Setting addon inspektor-gadget=true in "addons-295637"
	I0327 17:32:26.736901   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.736906   13562 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-295637"
	I0327 17:32:26.736922   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.736940   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737092   13562 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-295637"
	I0327 17:32:26.737151   13562 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-295637"
	I0327 17:32:26.737179   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.736836   13562 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-295637"
	I0327 17:32:26.736874   13562 addons.go:234] Setting addon cloud-spanner=true in "addons-295637"
	I0327 17:32:26.736822   13562 addons.go:69] Setting helm-tiller=true in profile "addons-295637"
	I0327 17:32:26.737305   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737313   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737326   13562 addons.go:234] Setting addon helm-tiller=true in "addons-295637"
	I0327 17:32:26.737330   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737342   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737353   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737377   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737538   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737567   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737625   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737634   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.736849   13562 mustload.go:65] Loading cluster: addons-295637
	I0327 17:32:26.737652   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737761   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737797   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.737816   13562 addons.go:69] Setting volumesnapshots=true in profile "addons-295637"
	I0327 17:32:26.736848   13562 addons.go:69] Setting registry=true in profile "addons-295637"
	I0327 17:32:26.737842   13562 addons.go:234] Setting addon volumesnapshots=true in "addons-295637"
	I0327 17:32:26.737860   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737879   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.736859   13562 addons.go:69] Setting metrics-server=true in profile "addons-295637"
	I0327 17:32:26.737987   13562 addons.go:234] Setting addon metrics-server=true in "addons-295637"
	I0327 17:32:26.738013   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737802   13562 config.go:182] Loaded profile config "addons-295637": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:32:26.737880   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.738143   13562 addons.go:69] Setting storage-provisioner=true in profile "addons-295637"
	I0327 17:32:26.738185   13562 addons.go:234] Setting addon storage-provisioner=true in "addons-295637"
	I0327 17:32:26.738234   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737817   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.738320   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.738359   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.738393   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.738398   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.738424   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737288   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.736876   13562 addons.go:234] Setting addon ingress=true in "addons-295637"
	I0327 17:32:26.739801   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.739821   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.737868   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.741031   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.741069   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.737870   13562 addons.go:234] Setting addon registry=true in "addons-295637"
	I0327 17:32:26.745738   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.746110   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.746128   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.758594   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0327 17:32:26.759170   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.759477   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0327 17:32:26.759733   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.759744   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.759800   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0327 17:32:26.759910   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.760438   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.760462   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.760573   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.760584   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.760956   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.760974   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.761019   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.761043   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.761242   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.761305   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.761511   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.761837   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.761880   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.762261   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0327 17:32:26.765808   13562 addons.go:234] Setting addon default-storageclass=true in "addons-295637"
	I0327 17:32:26.765848   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.766215   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.766237   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.769674   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0327 17:32:26.769812   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.769863   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.770229   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.770322   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.770463   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0327 17:32:26.770646   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.770678   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.770908   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.771071   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.771081   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.771213   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.771223   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.771930   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.771946   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.772009   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.772053   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43227
	I0327 17:32:26.772495   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.772520   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.772892   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.772971   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.773048   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.773271   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.773493   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.773509   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.773911   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.773936   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.774425   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.774935   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.774980   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.775507   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0327 17:32:26.775920   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.776452   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.776475   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.777497   13562 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-295637"
	I0327 17:32:26.777550   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.777895   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.777923   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.778618   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.779194   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.779251   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.810689   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I0327 17:32:26.810857   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0327 17:32:26.811288   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.811496   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0327 17:32:26.811652   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.811819   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.811844   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.812058   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.812081   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.812147   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.812187   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
	I0327 17:32:26.812218   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.812500   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.812507   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.812754   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.812836   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.812852   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.813643   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.814130   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35085
	I0327 17:32:26.814183   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.814231   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.814264   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0327 17:32:26.814430   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.814443   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0327 17:32:26.814553   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.814557   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.815201   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.815220   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.815205   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.815240   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.815250   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.815334   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.815358   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.815666   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.815690   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.815800   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.816031   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.816100   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.816140   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.816646   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.816680   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.816884   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0327 17:32:26.816893   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.817275   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.817290   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.817306   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.819387   13562 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 17:32:26.818054   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.818166   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.818576   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.819261   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.820763   13562 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 17:32:26.820775   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 17:32:26.820793   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.820840   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.822272   13562 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 17:32:26.821469   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.821850   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.823464   13562 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 17:32:26.823496   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 17:32:26.823513   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.823554   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 17:32:26.823798   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.824938   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 17:32:26.825887   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.826852   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 17:32:26.826884   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.826021   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.827558   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.828064   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.828256   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 17:32:26.828594   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.828613   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.828851   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.829616   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.831409   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0327 17:32:26.831464   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 17:32:26.831704   13562 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 17:32:26.831729   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.831496   13562 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0327 17:32:26.832003   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.832034   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.833518   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.834038   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0327 17:32:26.834615   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 17:32:26.834721   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.834972   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.835018   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.835512   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.836505   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.836530   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.836600   13562 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 17:32:26.836610   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 17:32:26.836626   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.836344   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 17:32:26.836461   13562 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0327 17:32:26.837110   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.837158   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.837331   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.838297   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.838355   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 17:32:26.840839   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 17:32:26.840868   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 17:32:26.840883   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.839266   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.839501   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0327 17:32:26.840953   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.840193   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.841004   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.842216   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.842293   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0327 17:32:26.842320   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0327 17:32:26.842705   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.842711   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.843405   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.843421   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.844067   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.844476   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.844866   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0327 17:32:26.844939   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.844953   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.845037   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0327 17:32:26.845075   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.845245   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.845657   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.845696   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.845874   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.845943   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.846349   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.846378   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.846620   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.846621   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.846633   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.846853   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.846885   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.846911   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0327 17:32:26.847004   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.847182   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.847193   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.847251   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.847313   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:26.847661   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.847703   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.847913   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.847981   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.847995   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.848022   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.848089   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.848401   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.848434   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.848468   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.848506   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.848658   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.848671   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.848714   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.848831   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.848940   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.849646   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.850056   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0327 17:32:26.850086   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.850113   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.850195   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.850451   13562 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 17:32:26.850469   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 17:32:26.850486   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.850500   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.850555   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.850700   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.851599   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.851617   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.852256   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.852481   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.853550   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.855913   13562 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 17:32:26.854007   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.854549   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.854767   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.855752   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.857073   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.858208   13562 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 17:32:26.857210   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.857332   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.857658   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.859287   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.861401   13562 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 17:32:26.859448   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.860278   13562 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 17:32:26.860667   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.862539   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.862999   13562 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 17:32:26.863013   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 17:32:26.863029   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.864722   13562 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 17:32:26.864737   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 17:32:26.864754   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.863159   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.863774   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0327 17:32:26.865719   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.866164   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.866179   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.866757   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.867299   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:26.867333   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:26.868162   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.868285   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0327 17:32:26.868603   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33801
	I0327 17:32:26.868748   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.869058   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.869076   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.869117   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.869202   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.869348   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.869362   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.869370   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.869538   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.869674   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.869814   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.869931   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.870149   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.870369   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.870391   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.870531   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.870543   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.871034   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.871104   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.871314   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.871361   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.871644   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.871832   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.872096   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.874171   13562 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 17:32:26.872793   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.875784   13562 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 17:32:26.875801   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 17:32:26.875820   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.877329   13562 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 17:32:26.878932   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 17:32:26.877256   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0327 17:32:26.879021   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 17:32:26.879047   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.879961   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.880213   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.880251   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I0327 17:32:26.880667   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.880780   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.880805   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.881015   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.881052   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.881176   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.881190   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.881411   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.881615   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.881673   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.881861   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.881967   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.882227   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.882378   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.882514   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.883609   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.883661   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.883687   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.883702   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.883705   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.885096   13562 out.go:177]   - Using image docker.io/busybox:stable
	I0327 17:32:26.883857   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.885334   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0327 17:32:26.887519   13562 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 17:32:26.886407   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.886532   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.888747   13562 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 17:32:26.888762   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 17:32:26.888774   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.888915   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.889262   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.889280   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.889597   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.889848   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.891731   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.893417   13562 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 17:32:26.892863   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.893348   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.894748   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.894711   13562 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 17:32:26.894774   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.894776   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 17:32:26.894795   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.895334   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.895462   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.895544   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.898120   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.898532   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.898576   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.898816   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.898989   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.899143   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.899274   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.899336   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0327 17:32:26.899675   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.899794   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0327 17:32:26.900130   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.900143   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.900157   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:26.900523   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:26.900535   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:26.900547   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.900704   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.900838   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:26.900998   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:26.902193   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.903882   13562 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 17:32:26.902489   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:26.906164   13562 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 17:32:26.907383   13562 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 17:32:26.907397   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 17:32:26.908616   13562 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 17:32:26.907415   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.909871   13562 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 17:32:26.909885   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 17:32:26.909899   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:26.912987   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.913274   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.913290   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.913542   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.913620   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.913692   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.913785   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.913985   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:26.914241   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:26.914259   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:26.914280   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:26.914420   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:26.914606   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:26.914744   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:27.038545   13562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 17:32:27.365145   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 17:32:27.381670   13562 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 17:32:27.381694   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 17:32:27.425390   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 17:32:27.468700   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 17:32:27.477156   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 17:32:27.501634   13562 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 17:32:27.501666   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 17:32:27.525517   13562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 17:32:27.546540   13562 node_ready.go:35] waiting up to 6m0s for node "addons-295637" to be "Ready" ...
	I0327 17:32:27.550071   13562 node_ready.go:49] node "addons-295637" has status "Ready":"True"
	I0327 17:32:27.550100   13562 node_ready.go:38] duration metric: took 3.520312ms for node "addons-295637" to be "Ready" ...
	I0327 17:32:27.550112   13562 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 17:32:27.559637   13562 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-tzq24" in "kube-system" namespace to be "Ready" ...
	I0327 17:32:27.601294   13562 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 17:32:27.601318   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 17:32:27.612151   13562 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 17:32:27.612182   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 17:32:27.627199   13562 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 17:32:27.627220   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 17:32:27.733330   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 17:32:27.788402   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 17:32:27.859755   13562 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 17:32:27.859778   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 17:32:27.890421   13562 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 17:32:27.890443   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 17:32:27.909556   13562 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 17:32:27.909585   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 17:32:27.932487   13562 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0327 17:32:27.932509   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0327 17:32:27.954405   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 17:32:27.954424   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 17:32:28.072845   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 17:32:28.084055   13562 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 17:32:28.084074   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 17:32:28.117150   13562 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 17:32:28.117171   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 17:32:28.567036   13562 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 17:32:28.567068   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 17:32:28.608473   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 17:32:28.617930   13562 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 17:32:28.617951   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0327 17:32:28.619802   13562 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 17:32:28.619821   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 17:32:28.622720   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 17:32:28.622738   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 17:32:28.733278   13562 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 17:32:28.733307   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 17:32:28.790872   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 17:32:28.794432   13562 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 17:32:28.794452   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 17:32:28.841224   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 17:32:28.851061   13562 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 17:32:28.851088   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 17:32:28.890833   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 17:32:28.890858   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 17:32:29.049739   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 17:32:29.101414   13562 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 17:32:29.101497   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 17:32:29.111363   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 17:32:29.111391   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 17:32:29.280367   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 17:32:29.280389   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 17:32:29.396621   13562 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 17:32:29.396657   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 17:32:29.407695   13562 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 17:32:29.407714   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 17:32:29.592314   13562 pod_ready.go:102] pod "coredns-76f75df574-tzq24" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:29.608887   13562 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 17:32:29.608908   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 17:32:29.699185   13562 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 17:32:29.699206   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 17:32:29.787406   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 17:32:29.819041   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 17:32:29.819066   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 17:32:29.831514   13562 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 17:32:29.831534   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 17:32:29.958426   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 17:32:29.958428   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 17:32:29.958488   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 17:32:30.193664   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 17:32:30.193686   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 17:32:30.428915   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 17:32:30.428939   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 17:32:30.606331   13562 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 17:32:30.606361   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 17:32:30.868760   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 17:32:31.599105   13562 pod_ready.go:102] pod "coredns-76f75df574-tzq24" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:33.489584   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.124407627s)
	I0327 17:32:33.489628   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.489642   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.489687   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.064257659s)
	I0327 17:32:33.489735   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.489749   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.021024948s)
	I0327 17:32:33.489754   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.489767   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.489775   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.489828   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.012634779s)
	I0327 17:32:33.489858   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.489829   13562 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.964285566s)
	I0327 17:32:33.489869   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.489878   13562 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0327 17:32:33.490144   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.490154   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.490174   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.490173   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.490182   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.490187   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.490204   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.490210   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.490213   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.490214   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.490221   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.490191   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.490233   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.490240   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.490192   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.490286   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.490224   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.490320   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.490327   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.490400   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.490550   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.490577   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.490584   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.490985   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.491001   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.491328   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.491340   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.492116   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.492125   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.492137   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.610879   13562 pod_ready.go:102] pod "coredns-76f75df574-tzq24" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:33.635838   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:33.635859   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:33.636128   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:33.636192   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:33.636139   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:33.710609   13562 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 17:32:33.710653   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:33.713590   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:33.714027   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:33.714055   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:33.714207   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:33.714409   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:33.714585   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:33.714734   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:34.013382   13562 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-295637" context rescaled to 1 replicas
	I0327 17:32:34.384976   13562 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 17:32:34.557841   13562 addons.go:234] Setting addon gcp-auth=true in "addons-295637"
	I0327 17:32:34.557897   13562 host.go:66] Checking if "addons-295637" exists ...
	I0327 17:32:34.558207   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:34.558243   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:34.573393   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0327 17:32:34.573846   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:34.574404   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:34.574428   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:34.574714   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:34.575280   13562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:32:34.575310   13562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:32:34.589462   13562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I0327 17:32:34.589872   13562 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:32:34.590249   13562 main.go:141] libmachine: Using API Version  1
	I0327 17:32:34.590274   13562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:32:34.590550   13562 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:32:34.590761   13562 main.go:141] libmachine: (addons-295637) Calling .GetState
	I0327 17:32:34.592296   13562 main.go:141] libmachine: (addons-295637) Calling .DriverName
	I0327 17:32:34.592521   13562 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 17:32:34.592548   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHHostname
	I0327 17:32:34.595304   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:34.595689   13562 main.go:141] libmachine: (addons-295637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:a0:84", ip: ""} in network mk-addons-295637: {Iface:virbr1 ExpiryTime:2024-03-27 18:31:45 +0000 UTC Type:0 Mac:52:54:00:92:a0:84 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-295637 Clientid:01:52:54:00:92:a0:84}
	I0327 17:32:34.595722   13562 main.go:141] libmachine: (addons-295637) DBG | domain addons-295637 has defined IP address 192.168.39.182 and MAC address 52:54:00:92:a0:84 in network mk-addons-295637
	I0327 17:32:34.595827   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHPort
	I0327 17:32:34.596017   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHKeyPath
	I0327 17:32:34.596149   13562 main.go:141] libmachine: (addons-295637) Calling .GetSSHUsername
	I0327 17:32:34.596298   13562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/addons-295637/id_rsa Username:docker}
	I0327 17:32:36.146716   13562 pod_ready.go:102] pod "coredns-76f75df574-tzq24" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:36.233273   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.44482705s)
	I0327 17:32:36.233322   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233319   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.160445304s)
	I0327 17:32:36.233334   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233358   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233373   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233379   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.624875883s)
	I0327 17:32:36.233452   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233477   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233488   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.442584969s)
	I0327 17:32:36.233510   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233524   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233572   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.392316946s)
	I0327 17:32:36.233598   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233608   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233626   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.183858865s)
	I0327 17:32:36.233643   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233652   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233753   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.446314361s)
	W0327 17:32:36.233781   13562 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 17:32:36.233799   13562 retry.go:31] will retry after 244.977471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 17:32:36.233874   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.275412922s)
	I0327 17:32:36.233893   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.233902   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.233961   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.233994   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234002   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234003   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234011   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234017   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234019   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234036   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234044   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234060   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234078   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234085   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234091   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234098   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234134   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234143   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234152   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234158   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234191   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234209   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234215   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234222   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234228   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234274   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234292   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234299   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234306   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234313   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234530   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234562   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234590   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.234608   13562 addons.go:470] Verifying addon metrics-server=true in "addons-295637"
	I0327 17:32:36.234699   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234866   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.501439946s)
	I0327 17:32:36.234896   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.234905   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.234958   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.234983   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.234990   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.235369   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.235387   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.235395   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.235402   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.235457   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.235545   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.235553   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.235561   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.235568   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.235611   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.235631   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.235638   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.235646   13562 addons.go:470] Verifying addon registry=true in "addons-295637"
	I0327 17:32:36.239397   13562 out.go:177] * Verifying registry addon...
	I0327 17:32:36.235766   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.235796   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.239435   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.239446   13562 addons.go:470] Verifying addon ingress=true in "addons-295637"
	I0327 17:32:36.235821   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.235843   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.235961   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.237056   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.237079   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.237996   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.238049   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.239572   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.240788   13562 out.go:177] * Verifying ingress addon...
	I0327 17:32:36.240808   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.240828   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.240838   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.243592   13562 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-295637 service yakd-dashboard -n yakd-dashboard
	
	I0327 17:32:36.243170   13562 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 17:32:36.244280   13562 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 17:32:36.251830   13562 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 17:32:36.251868   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:36.257636   13562 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 17:32:36.257653   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:36.265851   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:36.265866   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:36.266126   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:36.266158   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:36.266172   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:36.479110   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 17:32:36.749731   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:36.758825   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:37.384306   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:37.384458   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:37.809401   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:37.819345   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:37.858428   13562 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.265876233s)
	I0327 17:32:37.860484   13562 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 17:32:37.858817   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.990000344s)
	I0327 17:32:37.862142   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:37.862161   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:37.863576   13562 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 17:32:37.862469   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:37.862492   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:37.864999   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:37.865030   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:37.865042   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:37.865039   13562 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 17:32:37.865116   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 17:32:37.865329   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:37.865335   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:37.865351   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:37.865367   13562 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-295637"
	I0327 17:32:37.866738   13562 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 17:32:37.868539   13562 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 17:32:37.930034   13562 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 17:32:37.930062   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 17:32:37.969832   13562 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 17:32:37.969852   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:37.998039   13562 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 17:32:37.998063   13562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 17:32:38.096811   13562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 17:32:38.251281   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:38.258916   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:38.376991   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:38.568246   13562 pod_ready.go:102] pod "coredns-76f75df574-tzq24" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:38.750001   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:38.750814   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:38.874924   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:39.096934   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.617762474s)
	I0327 17:32:39.096992   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:39.097006   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:39.097278   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:39.097300   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:39.097351   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:39.097363   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:39.097381   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:39.097611   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:39.097629   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:39.097647   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:39.252576   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:39.252935   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:39.376397   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:39.591876   13562 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.495021452s)
	I0327 17:32:39.591926   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:39.591940   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:39.592247   13562 main.go:141] libmachine: (addons-295637) DBG | Closing plugin on server side
	I0327 17:32:39.592286   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:39.592297   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:39.592310   13562 main.go:141] libmachine: Making call to close driver server
	I0327 17:32:39.592321   13562 main.go:141] libmachine: (addons-295637) Calling .Close
	I0327 17:32:39.592554   13562 main.go:141] libmachine: Successfully made call to close driver server
	I0327 17:32:39.592574   13562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 17:32:39.594636   13562 addons.go:470] Verifying addon gcp-auth=true in "addons-295637"
	I0327 17:32:39.596187   13562 out.go:177] * Verifying gcp-auth addon...
	I0327 17:32:39.598211   13562 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 17:32:39.614373   13562 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 17:32:39.614390   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:39.763283   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:39.769638   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:39.877668   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:40.063586   13562 pod_ready.go:97] error getting pod "coredns-76f75df574-tzq24" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tzq24" not found
	I0327 17:32:40.063612   13562 pod_ready.go:81] duration metric: took 12.503938517s for pod "coredns-76f75df574-tzq24" in "kube-system" namespace to be "Ready" ...
	E0327 17:32:40.063625   13562 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-tzq24" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tzq24" not found
	I0327 17:32:40.063634   13562 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vvxh8" in "kube-system" namespace to be "Ready" ...
	I0327 17:32:40.107083   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:40.251345   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:40.251846   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:40.374964   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:40.602612   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:40.750548   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:40.755605   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:40.875897   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:41.102986   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:41.251115   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:41.252260   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:41.374485   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:41.601627   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:41.751798   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:41.751875   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:41.875412   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:42.069657   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:42.102018   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:42.251663   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:42.252050   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:42.374861   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:42.603378   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:42.754246   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:42.764844   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:42.875027   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:43.102281   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:43.250942   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:43.251833   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:43.375179   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:43.601649   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:43.752059   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:43.752371   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:43.874428   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:44.079014   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:44.102861   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:44.251100   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:44.253093   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:44.374871   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:44.602420   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:44.750805   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:44.752120   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:44.875901   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:45.101659   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:45.251357   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:45.251405   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:45.376229   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:45.602308   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:45.750022   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:45.751092   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:45.874401   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:46.104316   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:46.252205   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:46.254784   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:46.375047   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:46.570175   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:46.602049   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:46.749458   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:46.750206   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:46.875696   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:47.103813   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:47.250880   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:47.251456   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:47.374922   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:47.602112   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:47.751354   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:47.751862   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:47.874885   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:48.103416   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:48.257061   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:48.257122   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:48.376635   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:48.570287   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:48.603329   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:48.755171   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:48.757319   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:48.875459   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:49.102127   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:49.251182   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:49.251265   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:49.374229   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:49.601506   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:49.750488   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:49.752017   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:49.873877   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:50.101825   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:50.252006   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:50.252444   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:50.386989   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:50.570512   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:50.604110   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:50.749391   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:50.749890   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:50.875476   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:51.102483   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:51.250948   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:51.251885   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:51.377193   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:51.601894   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:51.750314   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:51.751810   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:51.873884   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:52.101417   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:52.249777   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:52.250304   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:52.375338   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:52.571579   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:52.602581   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:52.751856   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:52.751938   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:52.876246   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:53.103389   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:53.249937   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:53.251777   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:53.376192   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:53.603054   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:53.750836   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:53.751451   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:53.878745   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:54.101664   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:54.251057   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:54.251325   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:54.375607   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:54.572635   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:54.605484   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:54.750942   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:54.752365   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:54.874156   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:55.101194   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:55.251744   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:55.252074   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:55.375608   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:55.602610   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:55.750865   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:55.751407   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:55.875205   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:56.101752   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:56.253036   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:56.253731   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:56.375163   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:56.602419   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:56.752253   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:56.753392   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:56.883162   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:57.071614   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:57.105681   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:57.249596   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:57.250607   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:57.376393   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:57.794858   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:57.797150   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:57.797635   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:57.876084   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:58.102054   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:58.250524   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:58.250929   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:58.375231   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:58.602364   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:58.749801   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:58.750115   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:58.875349   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:59.102256   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:59.250008   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:59.250026   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:59.375368   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:32:59.570331   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:32:59.602718   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:32:59.751631   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:32:59.751988   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:32:59.875314   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:00.102295   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:00.252865   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:00.254700   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:00.374717   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:00.601509   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:00.749531   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:00.751778   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:00.876284   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:01.102936   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:01.250780   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:01.252870   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:01.374610   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:01.571584   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:33:01.602757   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:01.754414   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:01.760239   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:01.875062   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:02.102741   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:02.251707   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:02.251814   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:02.377752   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:02.601726   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:02.752057   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:02.752226   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:02.873901   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:03.101707   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:03.253303   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:03.255609   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:03.374352   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:03.601676   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:03.754201   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:03.760304   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:03.874489   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:04.070666   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:33:04.101548   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:04.252458   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:04.254181   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:04.374517   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:04.601311   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:04.749734   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:04.751518   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:04.876991   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:05.114350   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:05.252372   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:05.254304   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:05.375286   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:05.602040   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:05.750404   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:05.750469   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:05.875388   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:06.071900   13562 pod_ready.go:102] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"False"
	I0327 17:33:06.101765   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:06.250620   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:06.253748   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:06.374578   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:06.601697   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:06.750212   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:06.750452   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:06.874002   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:07.370269   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:07.375743   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:07.375772   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:07.378852   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:07.579843   13562 pod_ready.go:92] pod "coredns-76f75df574-vvxh8" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.579863   13562 pod_ready.go:81] duration metric: took 27.516221254s for pod "coredns-76f75df574-vvxh8" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.579872   13562 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.599138   13562 pod_ready.go:92] pod "etcd-addons-295637" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.599169   13562 pod_ready.go:81] duration metric: took 19.287529ms for pod "etcd-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.599181   13562 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.602147   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:07.605762   13562 pod_ready.go:92] pod "kube-apiserver-addons-295637" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.605785   13562 pod_ready.go:81] duration metric: took 6.595621ms for pod "kube-apiserver-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.605797   13562 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.612856   13562 pod_ready.go:92] pod "kube-controller-manager-addons-295637" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.612877   13562 pod_ready.go:81] duration metric: took 7.06536ms for pod "kube-controller-manager-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.612888   13562 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6dqj" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.625247   13562 pod_ready.go:92] pod "kube-proxy-h6dqj" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.625267   13562 pod_ready.go:81] duration metric: took 12.371525ms for pod "kube-proxy-h6dqj" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.625279   13562 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.749460   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:07.750660   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:07.874674   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:07.974159   13562 pod_ready.go:92] pod "kube-scheduler-addons-295637" in "kube-system" namespace has status "Ready":"True"
	I0327 17:33:07.974181   13562 pod_ready.go:81] duration metric: took 348.894699ms for pod "kube-scheduler-addons-295637" in "kube-system" namespace to be "Ready" ...
	I0327 17:33:07.974200   13562 pod_ready.go:38] duration metric: took 40.424061012s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 17:33:07.974219   13562 api_server.go:52] waiting for apiserver process to appear ...
	I0327 17:33:07.974271   13562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 17:33:07.996571   13562 api_server.go:72] duration metric: took 41.262937208s to wait for apiserver process to appear ...
	I0327 17:33:07.996591   13562 api_server.go:88] waiting for apiserver healthz status ...
	I0327 17:33:07.996625   13562 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0327 17:33:08.000783   13562 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0327 17:33:08.001982   13562 api_server.go:141] control plane version: v1.29.3
	I0327 17:33:08.002002   13562 api_server.go:131] duration metric: took 5.403985ms to wait for apiserver health ...
	I0327 17:33:08.002010   13562 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 17:33:08.103342   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:08.180442   13562 system_pods.go:59] 18 kube-system pods found
	I0327 17:33:08.180468   13562 system_pods.go:61] "coredns-76f75df574-vvxh8" [10cdc727-ea2c-4087-a48b-5d7daaa0adbf] Running
	I0327 17:33:08.180475   13562 system_pods.go:61] "csi-hostpath-attacher-0" [9a4a0f5f-ca4e-47cb-8b7b-3b0934657e22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 17:33:08.180482   13562 system_pods.go:61] "csi-hostpath-resizer-0" [8d629efb-4535-485d-b3e2-36bbbe3f4109] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 17:33:08.180489   13562 system_pods.go:61] "csi-hostpathplugin-jnf29" [b5957b28-ef86-4324-9f7a-821e6485f2a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 17:33:08.180493   13562 system_pods.go:61] "etcd-addons-295637" [cf2db71a-337f-43f9-acb8-8b2dc9329f23] Running
	I0327 17:33:08.180498   13562 system_pods.go:61] "kube-apiserver-addons-295637" [e0ee83c1-2b8d-4ce6-bdfd-d193372e2f51] Running
	I0327 17:33:08.180502   13562 system_pods.go:61] "kube-controller-manager-addons-295637" [50eeac48-b3c3-4da6-b935-5cf94a5e306e] Running
	I0327 17:33:08.180505   13562 system_pods.go:61] "kube-ingress-dns-minikube" [44c54cb4-9447-497b-bdfb-9ad131d873fe] Running
	I0327 17:33:08.180508   13562 system_pods.go:61] "kube-proxy-h6dqj" [5fec4f76-0b31-4f19-aa18-a8539b1d4abf] Running
	I0327 17:33:08.180511   13562 system_pods.go:61] "kube-scheduler-addons-295637" [ddf0bb77-4080-4cb2-a347-232ceac2dd4c] Running
	I0327 17:33:08.180514   13562 system_pods.go:61] "metrics-server-69cf46c98-9xm8s" [7cdf4bf7-3b50-44f1-8b1d-9c1aa9119f76] Running
	I0327 17:33:08.180518   13562 system_pods.go:61] "nvidia-device-plugin-daemonset-t9c2x" [d9323477-3ff2-48a8-b533-d6982941056b] Running
	I0327 17:33:08.180522   13562 system_pods.go:61] "registry-96v7w" [d832ec59-db4f-49f8-9d30-8d71b9eb4114] Running
	I0327 17:33:08.180526   13562 system_pods.go:61] "registry-proxy-6zr24" [46862e2d-ebd8-4c1e-9833-05246836fd4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 17:33:08.180535   13562 system_pods.go:61] "snapshot-controller-58dbcc7b99-96gjl" [6d391681-67c2-4698-8042-21e8cf842f04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 17:33:08.180541   13562 system_pods.go:61] "snapshot-controller-58dbcc7b99-fzvn2" [38afdf0e-6149-4888-bfb7-d62d8ccc3639] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 17:33:08.180545   13562 system_pods.go:61] "storage-provisioner" [6bf4636a-6680-47f0-95c2-fe8d0097ead0] Running
	I0327 17:33:08.180548   13562 system_pods.go:61] "tiller-deploy-7b677967b9-nzj4k" [dd673c10-9d67-489b-9c54-bc28e885a8ca] Running
	I0327 17:33:08.180554   13562 system_pods.go:74] duration metric: took 178.53877ms to wait for pod list to return data ...
	I0327 17:33:08.180563   13562 default_sa.go:34] waiting for default service account to be created ...
	I0327 17:33:08.250165   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:08.250762   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:08.374058   13562 default_sa.go:45] found service account: "default"
	I0327 17:33:08.374084   13562 default_sa.go:55] duration metric: took 193.514647ms for default service account to be created ...
	I0327 17:33:08.374093   13562 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 17:33:08.377927   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:08.580277   13562 system_pods.go:86] 18 kube-system pods found
	I0327 17:33:08.580302   13562 system_pods.go:89] "coredns-76f75df574-vvxh8" [10cdc727-ea2c-4087-a48b-5d7daaa0adbf] Running
	I0327 17:33:08.580313   13562 system_pods.go:89] "csi-hostpath-attacher-0" [9a4a0f5f-ca4e-47cb-8b7b-3b0934657e22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 17:33:08.580322   13562 system_pods.go:89] "csi-hostpath-resizer-0" [8d629efb-4535-485d-b3e2-36bbbe3f4109] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 17:33:08.580332   13562 system_pods.go:89] "csi-hostpathplugin-jnf29" [b5957b28-ef86-4324-9f7a-821e6485f2a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 17:33:08.580340   13562 system_pods.go:89] "etcd-addons-295637" [cf2db71a-337f-43f9-acb8-8b2dc9329f23] Running
	I0327 17:33:08.580346   13562 system_pods.go:89] "kube-apiserver-addons-295637" [e0ee83c1-2b8d-4ce6-bdfd-d193372e2f51] Running
	I0327 17:33:08.580354   13562 system_pods.go:89] "kube-controller-manager-addons-295637" [50eeac48-b3c3-4da6-b935-5cf94a5e306e] Running
	I0327 17:33:08.580360   13562 system_pods.go:89] "kube-ingress-dns-minikube" [44c54cb4-9447-497b-bdfb-9ad131d873fe] Running
	I0327 17:33:08.580367   13562 system_pods.go:89] "kube-proxy-h6dqj" [5fec4f76-0b31-4f19-aa18-a8539b1d4abf] Running
	I0327 17:33:08.580377   13562 system_pods.go:89] "kube-scheduler-addons-295637" [ddf0bb77-4080-4cb2-a347-232ceac2dd4c] Running
	I0327 17:33:08.580384   13562 system_pods.go:89] "metrics-server-69cf46c98-9xm8s" [7cdf4bf7-3b50-44f1-8b1d-9c1aa9119f76] Running
	I0327 17:33:08.580393   13562 system_pods.go:89] "nvidia-device-plugin-daemonset-t9c2x" [d9323477-3ff2-48a8-b533-d6982941056b] Running
	I0327 17:33:08.580400   13562 system_pods.go:89] "registry-96v7w" [d832ec59-db4f-49f8-9d30-8d71b9eb4114] Running
	I0327 17:33:08.580409   13562 system_pods.go:89] "registry-proxy-6zr24" [46862e2d-ebd8-4c1e-9833-05246836fd4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 17:33:08.580421   13562 system_pods.go:89] "snapshot-controller-58dbcc7b99-96gjl" [6d391681-67c2-4698-8042-21e8cf842f04] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 17:33:08.580432   13562 system_pods.go:89] "snapshot-controller-58dbcc7b99-fzvn2" [38afdf0e-6149-4888-bfb7-d62d8ccc3639] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 17:33:08.580444   13562 system_pods.go:89] "storage-provisioner" [6bf4636a-6680-47f0-95c2-fe8d0097ead0] Running
	I0327 17:33:08.580454   13562 system_pods.go:89] "tiller-deploy-7b677967b9-nzj4k" [dd673c10-9d67-489b-9c54-bc28e885a8ca] Running
	I0327 17:33:08.580462   13562 system_pods.go:126] duration metric: took 206.362483ms to wait for k8s-apps to be running ...
	I0327 17:33:08.580471   13562 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 17:33:08.580536   13562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 17:33:08.602469   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:08.617461   13562 system_svc.go:56] duration metric: took 36.982096ms WaitForService to wait for kubelet
	I0327 17:33:08.617484   13562 kubeadm.go:576] duration metric: took 41.883851296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 17:33:08.617506   13562 node_conditions.go:102] verifying NodePressure condition ...
	I0327 17:33:08.752454   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:08.752584   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:08.774682   13562 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 17:33:08.774704   13562 node_conditions.go:123] node cpu capacity is 2
	I0327 17:33:08.774714   13562 node_conditions.go:105] duration metric: took 157.203377ms to run NodePressure ...
	I0327 17:33:08.774725   13562 start.go:240] waiting for startup goroutines ...
	I0327 17:33:08.874814   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:09.104620   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:09.250294   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:09.251216   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:09.373944   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:09.601526   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:09.750294   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:09.751600   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:09.875139   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:10.102614   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:10.251502   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:10.252716   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:10.376534   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:10.601852   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:10.749741   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:10.750352   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:10.875823   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:11.101731   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:11.251702   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:11.251906   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:11.374729   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:11.603085   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:11.750439   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:11.750779   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:11.875642   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:12.102811   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:12.253520   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:12.255280   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:12.374728   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:12.602525   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:12.749449   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:12.751486   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 17:33:12.874394   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:13.103041   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:13.249999   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:13.251361   13562 kapi.go:107] duration metric: took 37.008188864s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 17:33:13.374789   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:13.602441   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:13.750180   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:13.874159   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:14.103872   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:14.250697   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:14.377214   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:14.832070   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:14.832160   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:14.877397   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:15.101805   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:15.250313   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:15.375368   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:15.602722   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:15.750083   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:15.874832   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:16.102553   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:16.250338   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:16.375088   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:16.602465   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:16.750110   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:16.875806   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:17.101630   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:17.250523   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:17.381283   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:17.606097   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:17.750277   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:17.875068   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:18.102800   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:18.250395   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:18.378585   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:18.601991   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:18.749882   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:18.877522   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:19.102974   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:19.251507   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:19.388241   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:19.602640   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:19.750119   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:19.875110   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:20.103150   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:20.249251   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:20.375800   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:20.603156   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:20.754575   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:20.880387   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:21.116062   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:21.253513   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:21.386528   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:21.601792   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:21.751958   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:21.893977   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:22.102585   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:22.251572   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:22.374220   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:22.602766   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:22.752786   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:22.875702   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:23.102129   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:23.252514   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:23.374655   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:23.604217   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:23.750086   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:23.876066   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:24.102078   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:24.249702   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:24.375467   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:24.613258   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:24.749814   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:24.874876   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:25.101483   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:25.250133   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:25.375379   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:25.602276   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:25.749399   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:25.873834   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:26.102138   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:26.250957   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:26.379945   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:26.602181   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:26.752814   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:26.876349   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:27.103681   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:27.249841   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:27.382187   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:27.602495   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:27.750016   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:27.874721   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:28.102484   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:28.249699   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:28.376981   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:28.602111   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:28.749302   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:28.875886   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:29.105413   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:29.250231   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:29.377025   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:29.602156   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:29.750940   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:30.065009   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:30.107499   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:30.250828   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:30.374331   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:30.603712   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:30.757584   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:30.874392   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:31.102636   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:31.252115   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:31.377554   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:31.603031   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:31.755571   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:31.890308   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:32.104309   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:32.250470   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:32.375621   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:32.603956   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:32.753686   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:32.874365   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:33.102542   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:33.252296   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:33.379581   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:33.602732   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:33.751756   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:33.876117   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:34.102758   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:34.252270   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:34.375392   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:34.607066   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:34.762297   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:34.874728   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:35.259158   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:35.259940   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:35.391266   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:35.602019   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:35.758467   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:35.876426   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:36.111975   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:36.250769   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:36.380932   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:36.603090   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:36.749847   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:36.875110   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:37.123300   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:37.250491   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:37.374827   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:37.601476   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:37.750564   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:37.889756   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:38.102627   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:38.251901   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:38.375021   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:38.618876   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:38.750181   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:38.874798   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:39.103128   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:39.251366   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:39.374063   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:39.602454   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:39.751123   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:39.881778   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:40.102395   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:40.250219   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:40.375900   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:40.602564   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:40.755902   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:40.875636   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:41.102624   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:41.250274   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:41.376596   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:41.616568   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:41.752652   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:41.876603   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:42.107749   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:42.253355   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:42.394912   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:42.608518   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:42.753174   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:42.876860   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:43.102735   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:43.250366   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:43.378835   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:43.601660   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:43.750656   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:43.878280   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:44.102347   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:44.249796   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:44.381503   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:44.603323   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:44.750044   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:44.876395   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:45.102507   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:45.250714   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:45.800779   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:45.804179   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:45.806208   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:45.875328   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:46.102469   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:46.253088   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:46.378577   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:46.602831   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:46.755626   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:46.874688   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:47.110648   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:47.250888   13562 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 17:33:47.378269   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:47.602265   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:47.751477   13562 kapi.go:107] duration metric: took 1m11.507194849s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 17:33:47.886040   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:48.110268   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:48.375740   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:48.602661   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:48.880068   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 17:33:49.103448   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:49.383820   13562 kapi.go:107] duration metric: took 1m11.515278963s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 17:33:49.602412   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:50.103194   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:50.602179   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:51.102404   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:51.602470   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:52.102025   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:52.602857   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:53.102281   13562 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 17:33:53.602410   13562 kapi.go:107] duration metric: took 1m14.004193704s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 17:33:53.604146   13562 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-295637 cluster.
	I0327 17:33:53.605605   13562 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 17:33:53.606867   13562 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 17:33:53.608126   13562 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0327 17:33:53.609382   13562 addons.go:505] duration metric: took 1m26.875748938s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns storage-provisioner-rancher metrics-server nvidia-device-plugin inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0327 17:33:53.609417   13562 start.go:245] waiting for cluster config update ...
	I0327 17:33:53.609449   13562 start.go:254] writing updated cluster config ...
	I0327 17:33:53.609683   13562 ssh_runner.go:195] Run: rm -f paused
	I0327 17:33:53.663896   13562 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 17:33:53.665631   13562 out.go:177] * Done! kubectl is now configured to use "addons-295637" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a15d5c012834a       98f6c3b32d565       19 seconds ago       Exited              helm-test                                0                   80b43bbadab80       helm-test
	d8e18572e4520       db2fc13d44d50       38 seconds ago       Running             gcp-auth                                 0                   932b1f6dc1c00       gcp-auth-7d69788767-lxb4q
	9563a19afd34c       738351fd438f0       42 seconds ago       Running             csi-snapshotter                          0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	c2456b656f0b4       ffcc66479b5ba       44 seconds ago       Running             controller                               0                   1b095604d3aac       ingress-nginx-controller-65496f9567-mp5wr
	5251af95ff041       b29d748098e32       48 seconds ago       Exited              patch                                    2                   65aaf9b5549ed       ingress-nginx-admission-patch-8s84w
	5efc18c471a71       931dbfd16f87c       50 seconds ago       Running             csi-provisioner                          0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	8c88f8819d177       e899260153aed       52 seconds ago       Running             liveness-probe                           0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	6de2267252718       e255e073c508c       53 seconds ago       Running             hostpath                                 0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	77929332c14ed       88ef14a257f42       54 seconds ago       Running             node-driver-registrar                    0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	7c26ff2e6cde1       19a639eda60f0       56 seconds ago       Running             csi-resizer                              0                   eba9359bdc69d       csi-hostpath-resizer-0
	8879c693bd694       59cbb42146a37       58 seconds ago       Running             csi-attacher                             0                   a7301f5cc49f1       csi-hostpath-attacher-0
	42202fb96c899       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   98d151dfe3e8b       csi-hostpathplugin-jnf29
	625890b1136f3       b29d748098e32       About a minute ago   Exited              create                                   0                   fde133fd27af1       ingress-nginx-admission-create-l2s6r
	dc910bd740de4       31de47c733c91       About a minute ago   Running             yakd                                     0                   0a3cc5d6e9402       yakd-dashboard-9947fc6bf-gvvf2
	9d2cfc8cfbe6a       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   1f9589f9a11a3       snapshot-controller-58dbcc7b99-fzvn2
	550d1859d3f60       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   e9665f3db738a       snapshot-controller-58dbcc7b99-96gjl
	8a87380a84634       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   aed4d834a7ddc       local-path-provisioner-78b46b4d5c-ktkp9
	58f1256461d11       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   cbba37556205c       kube-ingress-dns-minikube
	768d306fb0c59       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   d8adefdb8a5e4       storage-provisioner
	676caeeb07ae3       cbb01a7bd410d       2 minutes ago        Running             coredns                                  0                   858fcc849e7d1       coredns-76f75df574-vvxh8
	fcaa7908b9040       a1d263b5dc5b0       2 minutes ago        Running             kube-proxy                               0                   313f0913916b6       kube-proxy-h6dqj
	5435db6245823       8c390d98f50c0       2 minutes ago        Running             kube-scheduler                           0                   4527aa62dca20       kube-scheduler-addons-295637
	ddd752818f4ab       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   91b6f2790e328       etcd-addons-295637
	930bbef1ecb13       6052a25da3f97       2 minutes ago        Running             kube-controller-manager                  0                   23c799ac32dd6       kube-controller-manager-addons-295637
	c34798f011fba       39f995c9f1996       2 minutes ago        Running             kube-apiserver                           0                   c0c8bb25126a0       kube-apiserver-addons-295637
	
	
	==> containerd <==
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.736758446Z" level=info msg="StopContainer for \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" with timeout 30 (s)"
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.737311444Z" level=info msg="Stop container \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" with signal terminated"
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.827059331Z" level=info msg="shim disconnected" id=e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8 namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.827187790Z" level=warning msg="cleaning up after shim disconnected" id=e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8 namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.827197400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.881753234Z" level=info msg="StopContainer for \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" returns successfully"
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.882231587Z" level=info msg="StopPodSandbox for \"b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe\""
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.882358142Z" level=info msg="Container to stop \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.935949450Z" level=info msg="shim disconnected" id=b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.936409639Z" level=warning msg="cleaning up after shim disconnected" id=b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.936494458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 27 17:34:29 addons-295637 containerd[651]: time="2024-03-27T17:34:29.963846620Z" level=warning msg="cleanup warnings time=\"2024-03-27T17:34:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.049931734Z" level=info msg="TearDown network for sandbox \"b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe\" successfully"
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.049993826Z" level=info msg="StopPodSandbox for \"b466db681f1c93b29ab3c70658ffe30702a86f4818a1b829bf21f8010692e9fe\" returns successfully"
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.587474429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:2d4085c0-fb07-44a2-8e15-a83acb458290,Namespace:default,Attempt:0,}"
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.663291452Z" level=info msg="RemoveContainer for \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\""
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.694714301Z" level=info msg="RemoveContainer for \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" returns successfully"
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.695765094Z" level=error msg="ContainerStatus for \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\": not found"
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.745679463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.745765798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.745807323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.746361987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.852727963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:2d4085c0-fb07-44a2-8e15-a83acb458290,Namespace:default,Attempt:0,} returns sandbox id \"f95d36e09c590c743eba90b5314055a432cff56ba073f825e8e0a21a0c488595\""
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.856074822Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 27 17:34:30 addons-295637 containerd[651]: time="2024-03-27T17:34:30.861744355Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	
	
	==> coredns [676caeeb07ae3e2f34670d381e878adfc551b6a0a124ed926cd05920c9f2584f] <==
	[INFO] 10.244.0.8:51822 - 22272 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000794177s
	[INFO] 10.244.0.8:36430 - 60284 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118964s
	[INFO] 10.244.0.8:36430 - 24190 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000226103s
	[INFO] 10.244.0.8:60269 - 28440 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000158519s
	[INFO] 10.244.0.8:60269 - 29982 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000414877s
	[INFO] 10.244.0.8:53839 - 31284 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100507s
	[INFO] 10.244.0.8:53839 - 19002 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000292175s
	[INFO] 10.244.0.8:53772 - 31168 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070732s
	[INFO] 10.244.0.8:53772 - 53191 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00028237s
	[INFO] 10.244.0.8:52391 - 33999 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066614s
	[INFO] 10.244.0.8:52391 - 30412 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008865s
	[INFO] 10.244.0.8:51624 - 7314 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006907s
	[INFO] 10.244.0.8:51624 - 56725 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000825118s
	[INFO] 10.244.0.8:40637 - 39549 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000059563s
	[INFO] 10.244.0.8:40637 - 17530 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00004785s
	[INFO] 10.244.0.22:45214 - 21293 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000389228s
	[INFO] 10.244.0.22:38065 - 25769 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000744015s
	[INFO] 10.244.0.22:45519 - 40382 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115603s
	[INFO] 10.244.0.22:52050 - 56772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015861s
	[INFO] 10.244.0.22:55570 - 12400 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111344s
	[INFO] 10.244.0.22:45217 - 60172 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000488362s
	[INFO] 10.244.0.22:55013 - 53314 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.002344389s
	[INFO] 10.244.0.22:52735 - 291 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003367162s
	[INFO] 10.244.0.25:60324 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000413608s
	[INFO] 10.244.0.25:46641 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143244s
	
	
	==> describe nodes <==
	Name:               addons-295637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-295637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=addons-295637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T17_32_13_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-295637
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-295637"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 17:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-295637
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 17:34:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 17:34:16 +0000   Wed, 27 Mar 2024 17:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 17:34:16 +0000   Wed, 27 Mar 2024 17:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 17:34:16 +0000   Wed, 27 Mar 2024 17:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 17:34:16 +0000   Wed, 27 Mar 2024 17:32:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    addons-295637
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f239f5014fc42bf906399c3a5fe059b
	  System UUID:                9f239f50-14fc-42bf-9063-99c3a5fe059b
	  Boot ID:                    48c66fbf-bbb3-4c64-9776-eb8a07265064
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gcp-auth                    gcp-auth-7d69788767-lxb4q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  ingress-nginx               ingress-nginx-controller-65496f9567-mp5wr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         116s
	  kube-system                 coredns-76f75df574-vvxh8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m4s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 csi-hostpathplugin-jnf29                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 etcd-addons-295637                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m20s
	  kube-system                 kube-apiserver-addons-295637                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-295637        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-h6dqj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-addons-295637                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 snapshot-controller-58dbcc7b99-96gjl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-58dbcc7b99-fzvn2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ktkp9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gvvf2               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m3s   kube-proxy       
	  Normal  Starting                 2m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s  kubelet          Node addons-295637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s  kubelet          Node addons-295637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s  kubelet          Node addons-295637 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m18s  kubelet          Node addons-295637 status is now: NodeReady
	  Normal  RegisteredNode           2m5s   node-controller  Node addons-295637 event: Registered Node addons-295637 in Controller
	
	
	==> dmesg <==
	[  +0.302558] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +5.060307] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.066888] kauditd_printk_skb: 158 callbacks suppressed
	[Mar27 17:32] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +4.721456] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.063840] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.206133] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.068814] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.730951] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[  +0.139312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.589573] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.001778] kauditd_printk_skb: 112 callbacks suppressed
	[  +6.623742] kauditd_printk_skb: 75 callbacks suppressed
	[Mar27 17:33] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.218830] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.200816] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.465066] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.178343] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.329177] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.744949] kauditd_printk_skb: 47 callbacks suppressed
	[Mar27 17:34] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.151851] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.409134] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.158324] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.762924] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [ddd752818f4ab86096650eddd88354ea38dde3b854d04fe1ec666113cf9ee9b4] <==
	{"level":"warn","ts":"2024-03-27T17:33:07.356114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.31144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-03-27T17:33:07.356163Z","caller":"traceutil/trace.go:171","msg":"trace[475358277] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:940; }","duration":"265.361751ms","start":"2024-03-27T17:33:07.090795Z","end":"2024-03-27T17:33:07.356157Z","steps":["trace[475358277] 'agreement among raft nodes before linearized reading'  (duration: 265.272102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:14.818196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.082433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-03-27T17:33:14.81837Z","caller":"traceutil/trace.go:171","msg":"trace[2068568474] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:961; }","duration":"228.284271ms","start":"2024-03-27T17:33:14.590068Z","end":"2024-03-27T17:33:14.818352Z","steps":["trace[2068568474] 'range keys from in-memory index tree'  (duration: 227.974625ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:30.0441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.809372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85664"}
	{"level":"info","ts":"2024-03-27T17:33:30.044184Z","caller":"traceutil/trace.go:171","msg":"trace[109207206] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1021; }","duration":"183.932708ms","start":"2024-03-27T17:33:29.860231Z","end":"2024-03-27T17:33:30.044164Z","steps":["trace[109207206] 'range keys from in-memory index tree'  (duration: 183.554956ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:30.044387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.123648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-27T17:33:30.044412Z","caller":"traceutil/trace.go:171","msg":"trace[822682199] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1021; }","duration":"131.180247ms","start":"2024-03-27T17:33:29.913224Z","end":"2024-03-27T17:33:30.044404Z","steps":["trace[822682199] 'count revisions from in-memory index tree'  (duration: 131.041049ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T17:33:35.238598Z","caller":"traceutil/trace.go:171","msg":"trace[963313081] linearizableReadLoop","detail":"{readStateIndex:1087; appliedIndex:1086; }","duration":"248.289299ms","start":"2024-03-27T17:33:34.990294Z","end":"2024-03-27T17:33:35.238583Z","steps":["trace[963313081] 'read index received'  (duration: 242.309585ms)","trace[963313081] 'applied index is now lower than readState.Index'  (duration: 5.978476ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T17:33:35.238699Z","caller":"traceutil/trace.go:171","msg":"trace[1869102115] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"256.41932ms","start":"2024-03-27T17:33:34.98227Z","end":"2024-03-27T17:33:35.238689Z","steps":["trace[1869102115] 'process raft request'  (duration: 250.369777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:35.238978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.670454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" ","response":"range_response_count:1 size:1742"}
	{"level":"info","ts":"2024-03-27T17:33:35.239033Z","caller":"traceutil/trace.go:171","msg":"trace[48185787] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1057; }","duration":"248.757595ms","start":"2024-03-27T17:33:34.990266Z","end":"2024-03-27T17:33:35.239023Z","steps":["trace[48185787] 'agreement among raft nodes before linearized reading'  (duration: 248.600153ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:35.239158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.432462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-27T17:33:35.239202Z","caller":"traceutil/trace.go:171","msg":"trace[1196058607] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1057; }","duration":"214.496493ms","start":"2024-03-27T17:33:35.024699Z","end":"2024-03-27T17:33:35.239196Z","steps":["trace[1196058607] 'agreement among raft nodes before linearized reading'  (duration: 214.441632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:35.239444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.799761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-27T17:33:35.239497Z","caller":"traceutil/trace.go:171","msg":"trace[957195896] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1057; }","duration":"103.876032ms","start":"2024-03-27T17:33:35.135615Z","end":"2024-03-27T17:33:35.239491Z","steps":["trace[957195896] 'agreement among raft nodes before linearized reading'  (duration: 103.812293ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:35.239697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.100399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-03-27T17:33:35.239741Z","caller":"traceutil/trace.go:171","msg":"trace[2017319161] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1057; }","duration":"150.164452ms","start":"2024-03-27T17:33:35.089571Z","end":"2024-03-27T17:33:35.239735Z","steps":["trace[2017319161] 'agreement among raft nodes before linearized reading'  (duration: 150.061269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:35.239857Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.240728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-lh6jb\" ","response":"range_response_count:1 size:3404"}
	{"level":"info","ts":"2024-03-27T17:33:35.239871Z","caller":"traceutil/trace.go:171","msg":"trace[121386041] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-lh6jb; range_end:; response_count:1; response_revision:1057; }","duration":"202.256273ms","start":"2024-03-27T17:33:35.03761Z","end":"2024-03-27T17:33:35.239866Z","steps":["trace[121386041] 'agreement among raft nodes before linearized reading'  (duration: 202.200493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:45.785735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.375716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-03-27T17:33:45.785854Z","caller":"traceutil/trace.go:171","msg":"trace[727691698] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1131; }","duration":"197.524652ms","start":"2024-03-27T17:33:45.588309Z","end":"2024-03-27T17:33:45.785834Z","steps":["trace[727691698] 'range keys from in-memory index tree'  (duration: 197.28005ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:45.785879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.571683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85788"}
	{"level":"info","ts":"2024-03-27T17:33:45.785915Z","caller":"traceutil/trace.go:171","msg":"trace[875222105] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1131; }","duration":"425.641474ms","start":"2024-03-27T17:33:45.360266Z","end":"2024-03-27T17:33:45.785907Z","steps":["trace[875222105] 'range keys from in-memory index tree'  (duration: 425.375184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T17:33:45.785935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T17:33:45.360249Z","time spent":"425.680722ms","remote":"127.0.0.1:46716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85811,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	
	
	==> gcp-auth [d8e18572e45207b9735d22f4a69699cb3d816669241632cb93a9f0ca84b6c360] <==
	2024/03/27 17:33:52 GCP Auth Webhook started!
	2024/03/27 17:33:53 Ready to marshal response ...
	2024/03/27 17:33:54 Ready to write response ...
	2024/03/27 17:33:54 Ready to marshal response ...
	2024/03/27 17:33:54 Ready to write response ...
	2024/03/27 17:34:04 Ready to marshal response ...
	2024/03/27 17:34:04 Ready to write response ...
	2024/03/27 17:34:05 Ready to marshal response ...
	2024/03/27 17:34:05 Ready to write response ...
	2024/03/27 17:34:06 Ready to marshal response ...
	2024/03/27 17:34:06 Ready to write response ...
	2024/03/27 17:34:13 Ready to marshal response ...
	2024/03/27 17:34:13 Ready to write response ...
	2024/03/27 17:34:14 Ready to marshal response ...
	2024/03/27 17:34:14 Ready to write response ...
	2024/03/27 17:34:30 Ready to marshal response ...
	2024/03/27 17:34:30 Ready to write response ...
	
	
	==> kernel <==
	 17:34:31 up 2 min,  0 users,  load average: 1.99, 1.32, 0.54
	Linux addons-295637 5.10.207 #1 SMP Wed Mar 20 21:49:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c34798f011fbad0704982351bb2a022a7e81581cc80dff411f456121080b5409] <==
	I0327 17:32:35.076142       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 17:32:35.628187       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.100.105.196"}
	I0327 17:32:35.694267       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.110.221.239"}
	I0327 17:32:35.772353       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0327 17:32:37.091191       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.226.194"}
	I0327 17:32:37.132908       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0327 17:32:37.557256       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.98.215.108"}
	I0327 17:32:39.338746       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.51.42"}
	E0327 17:33:04.915960       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.104.52:443: connect: connection refused
	W0327 17:33:04.916581       1 handler_proxy.go:93] no RequestInfo found in the context
	E0327 17:33:04.916989       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0327 17:33:04.918021       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.104.52:443: connect: connection refused
	E0327 17:33:04.921459       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.104.52:443: connect: connection refused
	E0327 17:33:04.942620       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.104.52:443: connect: connection refused
	E0327 17:33:04.984378       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.104.52:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.104.52:443: connect: connection refused
	I0327 17:33:05.122002       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0327 17:34:05.944075       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0327 17:34:22.579920       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0327 17:34:23.095865       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 17:34:24.305368       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0327 17:34:24.566319       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	W0327 17:34:25.400787       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 17:34:30.114078       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 17:34:30.318392       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.167.38"}
	
	
	==> kube-controller-manager [930bbef1ecb135c98d625a6e25a520cd28af5d1e3d3c5700bf3aa70c6a96cea9] <==
	I0327 17:33:56.008324       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 17:34:00.198102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="48.665831ms"
	I0327 17:34:00.198497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="90.855µs"
	I0327 17:34:00.376224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="5.328µs"
	I0327 17:34:07.341066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="75.005µs"
	I0327 17:34:08.028902       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 17:34:08.034713       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 17:34:08.081185       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0327 17:34:08.082692       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0327 17:34:10.272897       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="17.769µs"
	I0327 17:34:11.008237       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 17:34:12.515181       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 17:34:17.390977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="7.913µs"
	I0327 17:34:24.982816       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	E0327 17:34:25.403261       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 17:34:26.008795       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0327 17:34:26.283978       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0327 17:34:26.284040       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 17:34:26.768096       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0327 17:34:26.768143       1 shared_informer.go:318] Caches are synced for garbage collector
	W0327 17:34:26.907885       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 17:34:26.908192       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 17:34:29.711695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="12.39µs"
	W0327 17:34:29.820712       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 17:34:29.820779       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fcaa7908b9040018e28440acb35af04868aa8ec786cbbfaa3e7e8741ce4d357b] <==
	I0327 17:32:27.938787       1 server_others.go:72] "Using iptables proxy"
	I0327 17:32:27.962841       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0327 17:32:28.116932       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 17:32:28.116953       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 17:32:28.116965       1 server_others.go:168] "Using iptables Proxier"
	I0327 17:32:28.122666       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 17:32:28.122832       1 server.go:865] "Version info" version="v1.29.3"
	I0327 17:32:28.122844       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 17:32:28.133387       1 config.go:188] "Starting service config controller"
	I0327 17:32:28.133408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 17:32:28.133490       1 config.go:97] "Starting endpoint slice config controller"
	I0327 17:32:28.133495       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 17:32:28.134365       1 config.go:315] "Starting node config controller"
	I0327 17:32:28.134373       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 17:32:28.234000       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 17:32:28.234043       1 shared_informer.go:318] Caches are synced for service config
	I0327 17:32:28.234417       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5435db6245823987deb678c16d997a8710cd12369963ce081d9533c181bf0f42] <==
	W0327 17:32:09.840767       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 17:32:09.841305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 17:32:10.652301       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 17:32:10.652329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 17:32:10.660459       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 17:32:10.660510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 17:32:10.660868       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 17:32:10.660883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 17:32:10.684760       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 17:32:10.684937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 17:32:10.727199       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 17:32:10.727343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 17:32:10.842631       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 17:32:10.842880       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 17:32:10.895619       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 17:32:10.895826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 17:32:10.985094       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 17:32:10.986480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 17:32:10.992857       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 17:32:10.993009       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 17:32:10.996998       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 17:32:10.997048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 17:32:11.083773       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 17:32:11.083838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0327 17:32:13.322141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.226959    1234 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t2l62\" (UniqueName: \"kubernetes.io/projected/b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e-kube-api-access-t2l62\") on node \"addons-295637\" DevicePath \"\""
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.274384    1234 topology_manager.go:215] "Topology Admit Handler" podUID="2d4085c0-fb07-44a2-8e15-a83acb458290" podNamespace="default" podName="nginx"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.274747    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0156c95-5ac4-4ac0-9d2e-ab0e0c301e54" containerName="task-pv-container"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.274948    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275089    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9323477-3ff2-48a8-b533-d6982941056b" containerName="nvidia-device-plugin-ctr"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275230    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd673c10-9d67-489b-9c54-bc28e885a8ca" containerName="tiller"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275367    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275461    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275498    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f017e9a6-b697-4715-9908-ad31f8708861" containerName="helm-test"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.275742    1234 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e" containerName="cloud-spanner-emulator"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276015    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9323477-3ff2-48a8-b533-d6982941056b" containerName="nvidia-device-plugin-ctr"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276140    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="f017e9a6-b697-4715-9908-ad31f8708861" containerName="helm-test"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276253    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e" containerName="cloud-spanner-emulator"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276393    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276481    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd673c10-9d67-489b-9c54-bc28e885a8ca" containerName="tiller"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276606    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0156c95-5ac4-4ac0-9d2e-ab0e0c301e54" containerName="task-pv-container"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276719    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.276838    1234 memory_manager.go:354] "RemoveStaleState removing state" podUID="644c9765-3bf8-4c8e-afe3-b7f71c78bc6a" containerName="gadget"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.328426    1234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhm92\" (UniqueName: \"kubernetes.io/projected/2d4085c0-fb07-44a2-8e15-a83acb458290-kube-api-access-rhm92\") pod \"nginx\" (UID: \"2d4085c0-fb07-44a2-8e15-a83acb458290\") " pod="default/nginx"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.329075    1234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2d4085c0-fb07-44a2-8e15-a83acb458290-gcp-creds\") pod \"nginx\" (UID: \"2d4085c0-fb07-44a2-8e15-a83acb458290\") " pod="default/nginx"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.652264    1234 scope.go:117] "RemoveContainer" containerID="e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.695032    1234 scope.go:117] "RemoveContainer" containerID="e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: E0327 17:34:30.696703    1234 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\": not found" containerID="e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8"
	Mar 27 17:34:30 addons-295637 kubelet[1234]: I0327 17:34:30.696768    1234 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8"} err="failed to get container status \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5c848268976f079d89bc04991bbcac961859fb4a0b2045c373e05ec2b3fe6b8\": not found"
	Mar 27 17:34:31 addons-295637 kubelet[1234]: I0327 17:34:31.339362    1234 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e" path="/var/lib/kubelet/pods/b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e/volumes"
	
	
	==> storage-provisioner [768d306fb0c5939bfbcb9fd8db83dee095ca15bdb63c60e4ef1cc15e4aa037bc] <==
	I0327 17:32:35.608805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 17:32:35.748169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 17:32:35.748399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 17:32:35.820218       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 17:32:35.822347       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-295637_2dd874d8-c8d8-4fe7-8952-669d4a0d2515!
	I0327 17:32:35.823236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2bfd6320-f75c-4d31-99bc-e66b41f3754b", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-295637_2dd874d8-c8d8-4fe7-8952-669d4a0d2515 became leader
	I0327 17:32:36.128752       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-295637_2dd874d8-c8d8-4fe7-8952-669d4a0d2515!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-295637 -n addons-295637
helpers_test.go:261: (dbg) Run:  kubectl --context addons-295637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-l2s6r ingress-nginx-admission-patch-8s84w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-295637 describe pod nginx ingress-nginx-admission-create-l2s6r ingress-nginx-admission-patch-8s84w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-295637 describe pod nginx ingress-nginx-admission-create-l2s6r ingress-nginx-admission-patch-8s84w: exit status 1 (68.139255ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-295637/192.168.39.182
	Start Time:       Wed, 27 Mar 2024 17:34:30 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhm92 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rhm92:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/nginx to addons-295637
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l2s6r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8s84w" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-295637 describe pod nginx ingress-nginx-admission-create-l2s6r ingress-nginx-admission-patch-8s84w: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (2.76s)

                                                
                                    

Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 46.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.29.3/json-events 13.17
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-beta.0/json-events 47.4
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 62.76
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 144.85
38 TestAddons/parallel/Registry 16.82
39 TestAddons/parallel/Ingress 21.9
40 TestAddons/parallel/InspektorGadget 11.96
41 TestAddons/parallel/MetricsServer 6.91
42 TestAddons/parallel/HelmTiller 17.08
44 TestAddons/parallel/CSI 58.4
46 TestAddons/parallel/CloudSpanner 6.64
47 TestAddons/parallel/LocalPath 56.5
48 TestAddons/parallel/NvidiaDevicePlugin 6.64
49 TestAddons/parallel/Yakd 6.02
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 92.73
54 TestCertOptions 101.81
55 TestCertExpiration 344.92
57 TestForceSystemdFlag 85.03
58 TestForceSystemdEnv 72.06
60 TestKVMDriverInstallOrUpdate 5.1
64 TestErrorSpam/setup 46.85
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.63
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 4.6
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.42
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 26.57
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.69
81 TestFunctional/serial/CacheCmd/cache/add_local 2.57
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 50.46
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.48
92 TestFunctional/serial/LogsFileCmd 1.45
93 TestFunctional/serial/InvalidService 3.55
95 TestFunctional/parallel/ConfigCmd 0.39
96 TestFunctional/parallel/DashboardCmd 32.01
97 TestFunctional/parallel/DryRun 0.29
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.91
103 TestFunctional/parallel/ServiceCmdConnect 64.48
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 67.6
107 TestFunctional/parallel/SSHCmd 0.4
108 TestFunctional/parallel/CpCmd 1.31
109 TestFunctional/parallel/MySQL 27.86
110 TestFunctional/parallel/FileSync 0.21
111 TestFunctional/parallel/CertSync 1.44
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
119 TestFunctional/parallel/License 0.63
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.73
125 TestFunctional/parallel/ImageCommands/Setup 2.14
135 TestFunctional/parallel/ServiceCmd/DeployApp 63.15
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.11
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.78
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.95
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.51
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
144 TestFunctional/parallel/ProfileCmd/profile_list 0.27
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
146 TestFunctional/parallel/MountCmd/any-port 43.32
147 TestFunctional/parallel/MountCmd/specific-port 1.66
148 TestFunctional/parallel/ServiceCmd/List 0.25
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
152 TestFunctional/parallel/ServiceCmd/Format 0.32
153 TestFunctional/parallel/ServiceCmd/URL 0.33
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
157 TestFunctional/parallel/Version/short 0.05
158 TestFunctional/parallel/Version/components 0.48
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 277.01
166 TestMultiControlPlane/serial/DeployApp 6.37
167 TestMultiControlPlane/serial/PingHostFromPods 1.33
168 TestMultiControlPlane/serial/AddWorkerNode 47.81
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
171 TestMultiControlPlane/serial/CopyFile 13.67
172 TestMultiControlPlane/serial/StopSecondaryNode 93.15
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
174 TestMultiControlPlane/serial/RestartSecondaryNode 41.57
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 499.7
177 TestMultiControlPlane/serial/DeleteSecondaryNode 7.15
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
179 TestMultiControlPlane/serial/StopCluster 276.44
180 TestMultiControlPlane/serial/RestartCluster 118.08
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMultiControlPlane/serial/AddSecondaryNode 69.67
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 61.12
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.71
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.36
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.2
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 93.37
219 TestMountStart/serial/StartWithMountFirst 29.61
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 29.25
222 TestMountStart/serial/VerifyMountSecond 0.38
223 TestMountStart/serial/DeleteFirst 0.86
224 TestMountStart/serial/VerifyMountPostDelete 0.38
225 TestMountStart/serial/Stop 1.76
226 TestMountStart/serial/RestartStopped 23.55
227 TestMountStart/serial/VerifyMountPostStop 0.38
230 TestMultiNode/serial/FreshStart2Nodes 105.32
231 TestMultiNode/serial/DeployApp2Nodes 4.93
232 TestMultiNode/serial/PingHostFrom2Pods 0.86
233 TestMultiNode/serial/AddNode 40.91
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.48
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 24.23
239 TestMultiNode/serial/RestartKeepsNodes 295.16
240 TestMultiNode/serial/DeleteNode 2.16
241 TestMultiNode/serial/StopMultiNode 184.07
242 TestMultiNode/serial/RestartMultiNode 125.37
243 TestMultiNode/serial/ValidateNameConflict 49
248 TestPreload 346.8
250 TestScheduledStopUnix 118.11
254 TestRunningBinaryUpgrade 180.24
256 TestKubernetesUpgrade 247.55
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 94.47
268 TestNetworkPlugins/group/false 3.14
272 TestNoKubernetes/serial/StartWithStopK8s 77.61
273 TestNoKubernetes/serial/Start 28.63
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
275 TestNoKubernetes/serial/ProfileList 0.81
276 TestNoKubernetes/serial/Stop 1.43
277 TestNoKubernetes/serial/StartNoArgs 77.76
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
279 TestStoppedBinaryUpgrade/Setup 2.87
280 TestStoppedBinaryUpgrade/Upgrade 158.31
289 TestPause/serial/Start 65.86
290 TestNetworkPlugins/group/auto/Start 126.87
291 TestPause/serial/SecondStartNoReconfiguration 62.2
292 TestNetworkPlugins/group/kindnet/Start 71.95
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
294 TestNetworkPlugins/group/calico/Start 98.43
295 TestPause/serial/Pause 0.93
296 TestPause/serial/VerifyStatus 0.28
297 TestPause/serial/Unpause 1.08
298 TestPause/serial/PauseAgain 1.31
299 TestPause/serial/DeletePaused 1.06
300 TestPause/serial/VerifyDeletedResources 0.45
301 TestNetworkPlugins/group/custom-flannel/Start 103.11
302 TestNetworkPlugins/group/auto/KubeletFlags 0.27
303 TestNetworkPlugins/group/auto/NetCatPod 10.28
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/auto/DNS 0.16
306 TestNetworkPlugins/group/auto/Localhost 0.14
307 TestNetworkPlugins/group/auto/HairPin 0.15
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
309 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
310 TestNetworkPlugins/group/kindnet/DNS 0.24
311 TestNetworkPlugins/group/kindnet/Localhost 0.17
312 TestNetworkPlugins/group/kindnet/HairPin 0.22
313 TestNetworkPlugins/group/enable-default-cni/Start 110.26
314 TestNetworkPlugins/group/flannel/Start 109.2
315 TestNetworkPlugins/group/calico/ControllerPod 5.2
316 TestNetworkPlugins/group/calico/KubeletFlags 0.38
317 TestNetworkPlugins/group/calico/NetCatPod 11.46
318 TestNetworkPlugins/group/calico/DNS 0.19
319 TestNetworkPlugins/group/calico/Localhost 0.15
320 TestNetworkPlugins/group/calico/HairPin 0.15
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
323 TestNetworkPlugins/group/custom-flannel/DNS 0.19
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
326 TestNetworkPlugins/group/bridge/Start 96.97
328 TestStartStop/group/old-k8s-version/serial/FirstStart 196.86
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
334 TestNetworkPlugins/group/flannel/ControllerPod 6.01
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
336 TestNetworkPlugins/group/flannel/NetCatPod 10.45
338 TestStartStop/group/no-preload/serial/FirstStart 171.67
339 TestNetworkPlugins/group/flannel/DNS 0.18
340 TestNetworkPlugins/group/flannel/Localhost 0.17
341 TestNetworkPlugins/group/flannel/HairPin 0.15
343 TestStartStop/group/embed-certs/serial/FirstStart 120.31
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
345 TestNetworkPlugins/group/bridge/NetCatPod 9.25
346 TestNetworkPlugins/group/bridge/DNS 0.19
347 TestNetworkPlugins/group/bridge/Localhost 0.16
348 TestNetworkPlugins/group/bridge/HairPin 0.17
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 111.74
351 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
352 TestStartStop/group/embed-certs/serial/DeployApp 10.33
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
355 TestStartStop/group/old-k8s-version/serial/Stop 92.5
356 TestStartStop/group/embed-certs/serial/Stop 92.48
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.47
360 TestStartStop/group/no-preload/serial/DeployApp 10.31
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
362 TestStartStop/group/no-preload/serial/Stop 91.82
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/old-k8s-version/serial/SecondStart 445.33
366 TestStartStop/group/embed-certs/serial/SecondStart 336.22
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.71
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
370 TestStartStop/group/no-preload/serial/SecondStart 353.14
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
374 TestStartStop/group/embed-certs/serial/Pause 3.33
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
377 TestStartStop/group/newest-cni/serial/FirstStart 60.31
378 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.94
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/no-preload/serial/Pause 2.78
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
387 TestStartStop/group/newest-cni/serial/Stop 2.35
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
389 TestStartStop/group/newest-cni/serial/SecondStart 39.46
390 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
392 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
393 TestStartStop/group/old-k8s-version/serial/Pause 2.83
394 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
397 TestStartStop/group/newest-cni/serial/Pause 2.49
x
+
TestDownloadOnly/v1.20.0/json-events (46.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-363016 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-363016 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (46.235188974s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (46.24s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-363016
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-363016: exit status 85 (69.054779ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:29 UTC |          |
	|         | -p download-only-363016        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 17:29:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 17:29:40.074494   12629 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:29:40.074619   12629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:29:40.074627   12629 out.go:304] Setting ErrFile to fd 2...
	I0327 17:29:40.074632   12629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:29:40.074818   12629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	W0327 17:29:40.074923   12629 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18517-5351/.minikube/config/config.json: open /home/jenkins/minikube-integration/18517-5351/.minikube/config/config.json: no such file or directory
	I0327 17:29:40.075464   12629 out.go:298] Setting JSON to true
	I0327 17:29:40.076340   12629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":714,"bootTime":1711559866,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:29:40.076405   12629 start.go:139] virtualization: kvm guest
	I0327 17:29:40.079065   12629 out.go:97] [download-only-363016] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 17:29:40.080678   12629 out.go:169] MINIKUBE_LOCATION=18517
	W0327 17:29:40.079193   12629 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 17:29:40.079251   12629 notify.go:220] Checking for updates...
	I0327 17:29:40.083597   12629 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:29:40.085138   12629 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:29:40.086612   12629 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:29:40.087977   12629 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 17:29:40.090517   12629 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 17:29:40.090784   12629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:29:40.187718   12629 out.go:97] Using the kvm2 driver based on user configuration
	I0327 17:29:40.187753   12629 start.go:297] selected driver: kvm2
	I0327 17:29:40.187763   12629 start.go:901] validating driver "kvm2" against <nil>
	I0327 17:29:40.188218   12629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:29:40.188349   12629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18517-5351/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 17:29:40.203042   12629 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 17:29:40.203091   12629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 17:29:40.203528   12629 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 17:29:40.203720   12629 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 17:29:40.203777   12629 cni.go:84] Creating CNI manager for ""
	I0327 17:29:40.203790   12629 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:29:40.203797   12629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 17:29:40.203840   12629 start.go:340] cluster config:
	{Name:download-only-363016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-363016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:29:40.203997   12629 iso.go:125] acquiring lock: {Name:mk44c6a96477688dc44b4b6d05c12d77dcc41cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:29:40.205909   12629 out.go:97] Downloading VM boot image ...
	I0327 17:29:40.205945   12629 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/iso/amd64/minikube-v1.33.0-beta.0-amd64.iso
	I0327 17:29:49.871721   12629 out.go:97] Starting "download-only-363016" primary control-plane node in "download-only-363016" cluster
	I0327 17:29:49.871758   12629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 17:29:49.979430   12629 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0327 17:29:49.979466   12629 cache.go:56] Caching tarball of preloaded images
	I0327 17:29:49.979631   12629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 17:29:49.981290   12629 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 17:29:49.981310   12629 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:29:50.093357   12629 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0327 17:30:02.318419   12629 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:02.318506   12629 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:03.215799   12629 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0327 17:30:03.216139   12629 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/download-only-363016/config.json ...
	I0327 17:30:03.216167   12629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/download-only-363016/config.json: {Name:mk7ca0a846d3e1238d1a0095f1d9dac053c9ebb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:30:03.216321   12629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0327 17:30:03.216480   12629 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-363016 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363016"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-363016
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.29.3/json-events (13.17s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-268880 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-268880 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.167786942s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (13.17s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-268880
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-268880: exit status 85 (75.459378ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:29 UTC |                     |
	|         | -p download-only-363016        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| delete  | -p download-only-363016        | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| start   | -o=json --download-only        | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC |                     |
	|         | -p download-only-268880        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 17:30:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 17:30:26.632166   12884 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:30:26.632255   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:30:26.632263   12884 out.go:304] Setting ErrFile to fd 2...
	I0327 17:30:26.632267   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:30:26.632430   12884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:30:26.632921   12884 out.go:298] Setting JSON to true
	I0327 17:30:26.633751   12884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":761,"bootTime":1711559866,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:30:26.633805   12884 start.go:139] virtualization: kvm guest
	I0327 17:30:26.635929   12884 out.go:97] [download-only-268880] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 17:30:26.637772   12884 out.go:169] MINIKUBE_LOCATION=18517
	I0327 17:30:26.636036   12884 notify.go:220] Checking for updates...
	I0327 17:30:26.640593   12884 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:30:26.641994   12884 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:30:26.643285   12884 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:30:26.644579   12884 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 17:30:26.646985   12884 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 17:30:26.647203   12884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:30:26.676891   12884 out.go:97] Using the kvm2 driver based on user configuration
	I0327 17:30:26.676914   12884 start.go:297] selected driver: kvm2
	I0327 17:30:26.676921   12884 start.go:901] validating driver "kvm2" against <nil>
	I0327 17:30:26.677220   12884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:30:26.677283   12884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18517-5351/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 17:30:26.690860   12884 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 17:30:26.690912   12884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 17:30:26.691405   12884 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 17:30:26.691525   12884 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 17:30:26.691574   12884 cni.go:84] Creating CNI manager for ""
	I0327 17:30:26.691586   12884 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:30:26.691593   12884 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 17:30:26.691648   12884 start.go:340] cluster config:
	{Name:download-only-268880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-268880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cont
ainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:30:26.691730   12884 iso.go:125] acquiring lock: {Name:mk44c6a96477688dc44b4b6d05c12d77dcc41cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:30:26.693263   12884 out.go:97] Starting "download-only-268880" primary control-plane node in "download-only-268880" cluster
	I0327 17:30:26.693281   12884 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 17:30:27.201343   12884 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0327 17:30:27.201375   12884 cache.go:56] Caching tarball of preloaded images
	I0327 17:30:27.201531   12884 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0327 17:30:27.203291   12884 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 17:30:27.203322   12884 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:27.309929   12884 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:dcad3363f354722395d68e96a1f5de54 -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-268880 host does not exist
	  To start a cluster, run: "minikube start -p download-only-268880"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-268880
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-beta.0/json-events (47.4s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-774033 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-774033 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (47.398071676s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (47.40s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-774033
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-774033: exit status 85 (70.778351ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:29 UTC |                     |
	|         | -p download-only-363016             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| delete  | -p download-only-363016             | download-only-363016 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| start   | -o=json --download-only             | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC |                     |
	|         | -p download-only-268880             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| delete  | -p download-only-268880             | download-only-268880 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC | 27 Mar 24 17:30 UTC |
	| start   | -o=json --download-only             | download-only-774033 | jenkins | v1.33.0-beta.0 | 27 Mar 24 17:30 UTC |                     |
	|         | -p download-only-774033             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 17:30:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 17:30:40.151702   13065 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:30:40.151816   13065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:30:40.151826   13065 out.go:304] Setting ErrFile to fd 2...
	I0327 17:30:40.151833   13065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:30:40.152029   13065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:30:40.152595   13065 out.go:298] Setting JSON to true
	I0327 17:30:40.153502   13065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":774,"bootTime":1711559866,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:30:40.153565   13065 start.go:139] virtualization: kvm guest
	I0327 17:30:40.155811   13065 out.go:97] [download-only-774033] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 17:30:40.157450   13065 out.go:169] MINIKUBE_LOCATION=18517
	I0327 17:30:40.156014   13065 notify.go:220] Checking for updates...
	I0327 17:30:40.160471   13065 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:30:40.162078   13065 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:30:40.163674   13065 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:30:40.165037   13065 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 17:30:40.167572   13065 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 17:30:40.167892   13065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:30:40.199194   13065 out.go:97] Using the kvm2 driver based on user configuration
	I0327 17:30:40.199226   13065 start.go:297] selected driver: kvm2
	I0327 17:30:40.199232   13065 start.go:901] validating driver "kvm2" against <nil>
	I0327 17:30:40.199591   13065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:30:40.199686   13065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18517-5351/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 17:30:40.213682   13065 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 17:30:40.213725   13065 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 17:30:40.214255   13065 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 17:30:40.214405   13065 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 17:30:40.214461   13065 cni.go:84] Creating CNI manager for ""
	I0327 17:30:40.214474   13065 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0327 17:30:40.214481   13065 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 17:30:40.214529   13065 start.go:340] cluster config:
	{Name:download-only-774033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-774033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:30:40.214612   13065 iso.go:125] acquiring lock: {Name:mk44c6a96477688dc44b4b6d05c12d77dcc41cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 17:30:40.216242   13065 out.go:97] Starting "download-only-774033" primary control-plane node in "download-only-774033" cluster
	I0327 17:30:40.216262   13065 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 17:30:40.722849   13065 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0327 17:30:40.722892   13065 cache.go:56] Caching tarball of preloaded images
	I0327 17:30:40.723032   13065 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 17:30:40.725059   13065 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 17:30:40.725075   13065 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:40.831933   13065 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:da32f15385f98142eac11fb4e1af2dd3 -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0327 17:30:51.443657   13065 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:51.443762   13065 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18517-5351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0327 17:30:52.298421   13065 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on containerd
	I0327 17:30:52.298762   13065 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/download-only-774033/config.json ...
	I0327 17:30:52.298790   13065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/download-only-774033/config.json: {Name:mkd1bc52ac7f5cde341a273fe5d5713955a8e2a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 17:30:52.298935   13065 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0327 17:30:52.299084   13065 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18517-5351/.minikube/cache/linux/amd64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-774033 host does not exist
	  To start a cluster, run: "minikube start -p download-only-774033"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-774033
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-504745 --alsologtostderr --binary-mirror http://127.0.0.1:32787 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-504745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-504745
--- PASS: TestBinaryMirror (0.56s)

TestOffline (62.76s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-007261 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-007261 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.727327734s)
helpers_test.go:175: Cleaning up "offline-containerd-007261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-007261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-007261: (1.037254864s)
--- PASS: TestOffline (62.76s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-295637
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-295637: exit status 85 (61.939537ms)

-- stdout --
	* Profile "addons-295637" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295637"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-295637
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-295637: exit status 85 (63.384391ms)

-- stdout --
	* Profile "addons-295637" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295637"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (144.85s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-295637 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-295637 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.852128652s)
--- PASS: TestAddons/Setup (144.85s)

TestAddons/parallel/Registry (16.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 34.781975ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-96v7w" [d832ec59-db4f-49f8-9d30-8d71b9eb4114] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006736929s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6zr24" [46862e2d-ebd8-4c1e-9833-05246836fd4f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005978573s
addons_test.go:340: (dbg) Run:  kubectl --context addons-295637 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-295637 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-295637 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.851642316s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 ip
2024/03/27 17:34:09 [DEBUG] GET http://192.168.39.182:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.82s)

TestAddons/parallel/Ingress (21.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-295637 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-295637 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-295637 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2d4085c0-fb07-44a2-8e15-a83acb458290] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2d4085c0-fb07-44a2-8e15-a83acb458290] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004044387s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-295637 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.182
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-295637 addons disable ingress-dns --alsologtostderr -v=1: (1.746038537s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-295637 addons disable ingress --alsologtostderr -v=1: (7.877303852s)
--- PASS: TestAddons/parallel/Ingress (21.90s)

TestAddons/parallel/InspektorGadget (11.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cfw5x" [644c9765-3bf8-4c8e-afe3-b7f71c78bc6a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004866261s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-295637
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-295637: (5.956309412s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

TestAddons/parallel/MetricsServer (6.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 34.729778ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-9xm8s" [7cdf4bf7-3b50-44f1-8b1d-9c1aa9119f76] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005179176s
addons_test.go:415: (dbg) Run:  kubectl --context addons-295637 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.91s)

TestAddons/parallel/HelmTiller (17.08s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.690443ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-nzj4k" [dd673c10-9d67-489b-9c54-bc28e885a8ca] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00566642s
addons_test.go:473: (dbg) Run:  kubectl --context addons-295637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-295637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.180646668s)
addons_test.go:478: kubectl --context addons-295637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-295637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-295637 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.436866009s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.08s)

TestAddons/parallel/CSI (58.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 50.416777ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-295637 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-295637 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d0156c95-5ac4-4ac0-9d2e-ab0e0c301e54] Pending
helpers_test.go:344: "task-pv-pod" [d0156c95-5ac4-4ac0-9d2e-ab0e0c301e54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d0156c95-5ac4-4ac0-9d2e-ab0e0c301e54] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.006677086s
addons_test.go:584: (dbg) Run:  kubectl --context addons-295637 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-295637 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-295637 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-295637 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-295637 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-295637 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-295637 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5c441167-a02f-4dd8-a721-2f765341cecf] Pending
helpers_test.go:344: "task-pv-pod-restore" [5c441167-a02f-4dd8-a721-2f765341cecf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5c441167-a02f-4dd8-a721-2f765341cecf] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0050428s
addons_test.go:626: (dbg) Run:  kubectl --context addons-295637 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-295637 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-295637 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-295637 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.113088297s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.40s)

TestAddons/parallel/CloudSpanner (6.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-b9vh8" [b2aaf2cc-d0c2-4070-aa62-67f9a5523a7e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004071377s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-295637
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

TestAddons/parallel/LocalPath (56.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-295637 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-295637 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [313c39ec-9642-430c-9045-101800abd4c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [313c39ec-9642-430c-9045-101800abd4c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [313c39ec-9642-430c-9045-101800abd4c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005662178s
addons_test.go:891: (dbg) Run:  kubectl --context addons-295637 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 ssh "cat /opt/local-path-provisioner/pvc-b157adfc-a620-496f-a31a-3bfb029d1256_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-295637 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-295637 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-295637 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-295637 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.550054939s)
--- PASS: TestAddons/parallel/LocalPath (56.50s)

TestAddons/parallel/NvidiaDevicePlugin (6.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t9c2x" [d9323477-3ff2-48a8-b533-d6982941056b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005370588s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-295637
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (6.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gvvf2" [b0f18d59-ac62-4f6f-bb19-f849ee2083a6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.016960528s
--- PASS: TestAddons/parallel/Yakd (6.02s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-295637 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-295637 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-295637
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-295637: (1m32.436922805s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-295637
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-295637
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-295637
--- PASS: TestAddons/StoppedEnableDisable (92.73s)

TestCertOptions (101.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-623604 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-623604 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m40.135971895s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-623604 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-623604 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-623604 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-623604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-623604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-623604: (1.183403487s)
--- PASS: TestCertOptions (101.81s)

TestCertExpiration (344.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415239 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415239 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m47.428411728s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415239 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415239 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (56.469351263s)
helpers_test.go:175: Cleaning up "cert-expiration-415239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-415239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-415239: (1.024926317s)
--- PASS: TestCertExpiration (344.92s)

TestForceSystemdFlag (85.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-895051 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0327 18:33:53.677764   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-895051 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m24.009668546s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-895051 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-895051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-895051
--- PASS: TestForceSystemdFlag (85.03s)

TestForceSystemdEnv (72.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-038736 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-038736 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m10.912928338s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-038736 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-038736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-038736
--- PASS: TestForceSystemdEnv (72.06s)

TestKVMDriverInstallOrUpdate (5.1s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.10s)

TestErrorSpam/setup (46.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-431850 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-431850 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-431850 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-431850 --driver=kvm2  --container-runtime=containerd: (46.847314394s)
--- PASS: TestErrorSpam/setup (46.85s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (4.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop: (1.617683657s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop: (1.164269051s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-431850 --log_dir /tmp/nospam-431850 stop: (1.819331345s)
--- PASS: TestErrorSpam/stop (4.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18517-5351/.minikube/files/etc/test/nested/copy/12617/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (99.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0327 17:38:53.679254   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.684997   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.695249   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.715539   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.755832   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.836164   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:53.996554   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:54.317114   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:54.958046   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:56.238550   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:38:58.800371   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:39:03.920972   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:39:14.162174   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-351970 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.421788784s)
--- PASS: TestFunctional/serial/StartWithProxy (99.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --alsologtostderr -v=8
E0327 17:39:34.643360   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-351970 --alsologtostderr -v=8: (26.573055648s)
functional_test.go:659: soft start took 26.573686084s for "functional-351970" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.57s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-351970 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:3.1: (1.175425596s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:3.3: (1.388396769s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 cache add registry.k8s.io/pause:latest: (1.130425432s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.69s)

TestFunctional/serial/CacheCmd/cache/add_local (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-351970 /tmp/TestFunctionalserialCacheCmdcacheadd_local1753465711/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache add minikube-local-cache-test:functional-351970
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 cache add minikube-local-cache-test:functional-351970: (2.197256278s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache delete minikube-local-cache-test:functional-351970
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-351970
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.57s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.044306ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 cache reload: (1.129095003s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 kubectl -- --context functional-351970 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-351970 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (50.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0327 17:40:15.604568   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-351970 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.463024933s)
functional_test.go:757: restart took 50.4631316s for "functional-351970" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.46s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-351970 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 logs: (1.484099871s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.45s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 logs --file /tmp/TestFunctionalserialLogsFileCmd610498790/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 logs --file /tmp/TestFunctionalserialLogsFileCmd610498790/001/logs.txt: (1.44576737s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

TestFunctional/serial/InvalidService (3.55s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-351970 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-351970
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-351970: exit status 115 (284.276642ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.114:31605 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-351970 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.55s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 config get cpus: exit status 14 (55.535662ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 config get cpus: exit status 14 (59.964695ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (32.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351970 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351970 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21086: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.01s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-351970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (151.489525ms)

-- stdout --
	* [functional-351970] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0327 17:41:56.736158   20720 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:41:56.736242   20720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:41:56.736249   20720 out.go:304] Setting ErrFile to fd 2...
	I0327 17:41:56.736253   20720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:41:56.736446   20720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:41:56.736923   20720 out.go:298] Setting JSON to false
	I0327 17:41:56.737884   20720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1451,"bootTime":1711559866,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:41:56.737943   20720 start.go:139] virtualization: kvm guest
	I0327 17:41:56.740266   20720 out.go:177] * [functional-351970] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 17:41:56.741590   20720 notify.go:220] Checking for updates...
	I0327 17:41:56.741592   20720 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 17:41:56.743110   20720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:41:56.744508   20720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:41:56.745822   20720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:41:56.747102   20720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 17:41:56.748436   20720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 17:41:56.750149   20720 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:41:56.750646   20720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:41:56.750696   20720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:41:56.767417   20720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0327 17:41:56.767822   20720 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:41:56.768653   20720 main.go:141] libmachine: Using API Version  1
	I0327 17:41:56.768695   20720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:41:56.769032   20720 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:41:56.769213   20720 main.go:141] libmachine: (functional-351970) Calling .DriverName
	I0327 17:41:56.769474   20720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:41:56.769773   20720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:41:56.769812   20720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:41:56.785768   20720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0327 17:41:56.786284   20720 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:41:56.786792   20720 main.go:141] libmachine: Using API Version  1
	I0327 17:41:56.786822   20720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:41:56.787218   20720 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:41:56.787402   20720 main.go:141] libmachine: (functional-351970) Calling .DriverName
	I0327 17:41:56.822218   20720 out.go:177] * Using the kvm2 driver based on existing profile
	I0327 17:41:56.823536   20720 start.go:297] selected driver: kvm2
	I0327 17:41:56.823556   20720 start.go:901] validating driver "kvm2" against &{Name:functional-351970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-351970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:41:56.823639   20720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 17:41:56.825596   20720 out.go:177] 
	W0327 17:41:56.826819   20720 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 17:41:56.828016   20720 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)
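The exit-status-23 failure above is minikube's requested-memory validation (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of that kind of check, reusing the 1800MB floor quoted in the log message; the function name and shape are illustrative, not minikube's actual code:

```go
package main

import "fmt"

// minUsableMB is the floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMB = 1800

// validateMemory sketches the kind of check that produces exit status 23;
// it is an illustrative stand-in, not minikube's real validation path.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the --memory 250MB runs above
	fmt.Println(validateMemory(4000)) // accepted: the profile's Memory value from the config dump
}
```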

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-351970 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (163.664209ms)

-- stdout --
	* [functional-351970] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0327 17:41:56.584681   20643 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:41:56.585241   20643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:41:56.585252   20643 out.go:304] Setting ErrFile to fd 2...
	I0327 17:41:56.585257   20643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:41:56.585561   20643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:41:56.586109   20643 out.go:298] Setting JSON to false
	I0327 17:41:56.587326   20643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1451,"bootTime":1711559866,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 17:41:56.587401   20643 start.go:139] virtualization: kvm guest
	I0327 17:41:56.589878   20643 out.go:177] * [functional-351970] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0327 17:41:56.591207   20643 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 17:41:56.591285   20643 notify.go:220] Checking for updates...
	I0327 17:41:56.592519   20643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 17:41:56.594059   20643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 17:41:56.595282   20643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 17:41:56.596522   20643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 17:41:56.597778   20643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 17:41:56.599570   20643 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:41:56.600150   20643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:41:56.600219   20643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:41:56.621009   20643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0327 17:41:56.621460   20643 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:41:56.622023   20643 main.go:141] libmachine: Using API Version  1
	I0327 17:41:56.622076   20643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:41:56.622391   20643 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:41:56.622548   20643 main.go:141] libmachine: (functional-351970) Calling .DriverName
	I0327 17:41:56.622785   20643 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 17:41:56.623124   20643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:41:56.623163   20643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:41:56.636980   20643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0327 17:41:56.637328   20643 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:41:56.637773   20643 main.go:141] libmachine: Using API Version  1
	I0327 17:41:56.637797   20643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:41:56.638077   20643 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:41:56.638270   20643 main.go:141] libmachine: (functional-351970) Calling .DriverName
	I0327 17:41:56.670908   20643 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0327 17:41:56.672117   20643 start.go:297] selected driver: kvm2
	I0327 17:41:56.672134   20643 start.go:901] validating driver "kvm2" against &{Name:functional-351970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-beta.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-351970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 17:41:56.672279   20643 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 17:41:56.674684   20643 out.go:177] 
	W0327 17:41:56.676088   20643 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 17:41:56.677253   20643 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (64.48s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-351970 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-351970 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4mbmm" [5144643d-8c4e-4b56-b9aa-3144d1486490] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "hello-node-connect-55497b8b78-4mbmm" [5144643d-8c4e-4b56-b9aa-3144d1486490] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-4mbmm" [5144643d-8c4e-4b56-b9aa-3144d1486490] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-4mbmm" [5144643d-8c4e-4b56-b9aa-3144d1486490] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m4.007075962s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.114:32260
functional_test.go:1671: http://192.168.39.114:32260: success! body:

Hostname: hello-node-connect-55497b8b78-4mbmm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.114:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.114:32260
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (64.48s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (67.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7289d1a9-9f35-433d-b7b3-4ec9fa0ceb0a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004029551s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-351970 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-351970 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351970 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351970 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0005242-5712-4462-85eb-5860b9cc0d67] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [c0005242-5712-4462-85eb-5860b9cc0d67] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [c0005242-5712-4462-85eb-5860b9cc0d67] Pending
E0327 17:41:37.525643   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [c0005242-5712-4462-85eb-5860b9cc0d67] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c0005242-5712-4462-85eb-5860b9cc0d67] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.008980011s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-351970 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-351970 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351970 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fae94492-0950-442f-9299-3b47c5ea3dd4] Pending
helpers_test.go:344: "sp-pod" [fae94492-0950-442f-9299-3b47c5ea3dd4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fae94492-0950-442f-9299-3b47c5ea3dd4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.017960023s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-351970 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (67.60s)

TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh -n functional-351970 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cp functional-351970:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1377796546/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh -n functional-351970 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh -n functional-351970 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (27.86s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-351970 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-7png7" [d2757f6e-becb-49a1-a277-1f71d698c998] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-7png7" [d2757f6e-becb-49a1-a277-1f71d698c998] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004094877s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;": exit status 1 (169.327344ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;": exit status 1 (199.311159ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;": exit status 1 (146.305893ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;": exit status 1 (148.230814ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-351970 exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
2024/03/27 17:42:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.86s)
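Note: the repeated `Non-zero exit` runs above are the harness polling the pod until mysqld finishes initializing — ERROR 1045 while the root password is still being applied, then ERROR 2002 while the socket is briefly down, then success. A minimal sketch of that retry pattern (`retry_until_ok` is an illustrative helper, not minikube code; the fixed 1s backoff and attempt budget are assumptions):

```shell
# Re-run a command with a short backoff until it exits 0
# or the attempt budget is exhausted (then return 1).
retry_until_ok() {
  max=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$max" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# e.g. retry_until_ok 30 kubectl --context functional-351970 \
#        exec mysql-859648c796-7png7 -- mysql -ppassword -e "show databases;"
```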
TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12617/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /etc/test/nested/copy/12617/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
TestFunctional/parallel/CertSync (1.44s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12617.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /etc/ssl/certs/12617.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12617.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /usr/share/ca-certificates/12617.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/126172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /etc/ssl/certs/126172.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/126172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /usr/share/ca-certificates/126172.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)
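The `51391683.0` / `3ec20f2e.0` paths checked above are OpenSSL subject-hash filenames: a synced certificate is expected both under its own name and under `/etc/ssl/certs/<subject_hash>.0`. A hedged sketch of how that hash component is derived (`cert_hash_name` is an illustrative helper, not test code):

```shell
# Print the OpenSSL subject-hash for a PEM certificate; a cert synced as
# 12617.pem is then also expected at /etc/ssl/certs/$(cert_hash_name 12617.pem).0
cert_hash_name() {
  openssl x509 -noout -subject_hash -in "$1"
}
```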
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-351970 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "sudo systemctl is-active docker": exit status 1 (251.615765ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "sudo systemctl is-active crio": exit status 1 (213.255205ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
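The `Non-zero exit ... exit status 1` and `ssh: Process exited with status 3` lines above are the expected outcome: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, so "disabled" here means non-zero exit plus `inactive` on stdout. A minimal sketch of that check (`runtime_disabled` is an illustrative name, not minikube code):

```shell
# A runtime counts as disabled when the probe command exits non-zero
# AND prints exactly "inactive" (systemd exits 3 for inactive units).
runtime_disabled() {
  out=$("$@" 2>/dev/null)
  rc=$?
  [ "$rc" -ne 0 ] && [ "$out" = "inactive" ]
}

# e.g. runtime_disabled out/minikube-linux-amd64 -p functional-351970 \
#        ssh "sudo systemctl is-active docker"
```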
TestFunctional/parallel/License (0.63s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351970 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-351970
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-351970
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351970 image ls --format short --alsologtostderr:
I0327 17:41:58.467400   21062 out.go:291] Setting OutFile to fd 1 ...
I0327 17:41:58.467511   21062 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:58.467522   21062 out.go:304] Setting ErrFile to fd 2...
I0327 17:41:58.467528   21062 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:58.467731   21062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
I0327 17:41:58.468276   21062 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:58.468395   21062 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:58.468790   21062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:58.468836   21062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:58.483731   21062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
I0327 17:41:58.484247   21062 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:58.484980   21062 main.go:141] libmachine: Using API Version  1
I0327 17:41:58.485013   21062 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:58.485483   21062 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:58.485705   21062 main.go:141] libmachine: (functional-351970) Calling .GetState
I0327 17:41:58.487542   21062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:58.487581   21062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:58.501770   21062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
I0327 17:41:58.502185   21062 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:58.502640   21062 main.go:141] libmachine: Using API Version  1
I0327 17:41:58.502663   21062 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:58.502984   21062 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:58.503165   21062 main.go:141] libmachine: (functional-351970) Calling .DriverName
I0327 17:41:58.503387   21062 ssh_runner.go:195] Run: systemctl --version
I0327 17:41:58.503414   21062 main.go:141] libmachine: (functional-351970) Calling .GetSSHHostname
I0327 17:41:58.505984   21062 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:58.506381   21062 main.go:141] libmachine: (functional-351970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:ea:4d", ip: ""} in network mk-functional-351970: {Iface:virbr1 ExpiryTime:2024-03-27 18:37:54 +0000 UTC Type:0 Mac:52:54:00:db:ea:4d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-351970 Clientid:01:52:54:00:db:ea:4d}
I0327 17:41:58.506406   21062 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined IP address 192.168.39.114 and MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:58.506528   21062 main.go:141] libmachine: (functional-351970) Calling .GetSSHPort
I0327 17:41:58.506683   21062 main.go:141] libmachine: (functional-351970) Calling .GetSSHKeyPath
I0327 17:41:58.506833   21062 main.go:141] libmachine: (functional-351970) Calling .GetSSHUsername
I0327 17:41:58.506962   21062 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/functional-351970/id_rsa Username:docker}
I0327 17:41:58.607937   21062 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 17:41:58.744441   21062 main.go:141] libmachine: Making call to close driver server
I0327 17:41:58.744455   21062 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:58.744739   21062 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:58.744763   21062 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 17:41:58.744780   21062 main.go:141] libmachine: Making call to close driver server
I0327 17:41:58.744792   21062 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:58.744989   21062 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:58.745004   21062 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351970 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-351970  | sha256:ac69c7 | 988B   |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| gcr.io/google-containers/addon-resizer      | functional-351970  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351970 image ls --format table --alsologtostderr:
I0327 17:41:59.249337   21193 out.go:291] Setting OutFile to fd 1 ...
I0327 17:41:59.249601   21193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.249612   21193 out.go:304] Setting ErrFile to fd 2...
I0327 17:41:59.249616   21193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.249852   21193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
I0327 17:41:59.250421   21193 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.250539   21193 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.250929   21193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.250981   21193 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.265478   21193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
I0327 17:41:59.265882   21193 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.266525   21193 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.266558   21193 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.266950   21193 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.267154   21193 main.go:141] libmachine: (functional-351970) Calling .GetState
I0327 17:41:59.269100   21193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.269146   21193 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.283422   21193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
I0327 17:41:59.283871   21193 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.284290   21193 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.284309   21193 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.284663   21193 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.284810   21193 main.go:141] libmachine: (functional-351970) Calling .DriverName
I0327 17:41:59.284975   21193 ssh_runner.go:195] Run: systemctl --version
I0327 17:41:59.284999   21193 main.go:141] libmachine: (functional-351970) Calling .GetSSHHostname
I0327 17:41:59.287752   21193 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.288119   21193 main.go:141] libmachine: (functional-351970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:ea:4d", ip: ""} in network mk-functional-351970: {Iface:virbr1 ExpiryTime:2024-03-27 18:37:54 +0000 UTC Type:0 Mac:52:54:00:db:ea:4d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-351970 Clientid:01:52:54:00:db:ea:4d}
I0327 17:41:59.288156   21193 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined IP address 192.168.39.114 and MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.288266   21193 main.go:141] libmachine: (functional-351970) Calling .GetSSHPort
I0327 17:41:59.288417   21193 main.go:141] libmachine: (functional-351970) Calling .GetSSHKeyPath
I0327 17:41:59.288575   21193 main.go:141] libmachine: (functional-351970) Calling .GetSSHUsername
I0327 17:41:59.288682   21193 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/functional-351970/id_rsa Username:docker}
I0327 17:41:59.374448   21193 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 17:41:59.428263   21193 main.go:141] libmachine: Making call to close driver server
I0327 17:41:59.428288   21193 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:59.428537   21193 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:59.428582   21193 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:41:59.428597   21193 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 17:41:59.428612   21193 main.go:141] libmachine: Making call to close driver server
I0327 17:41:59.428625   21193 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:59.428894   21193 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:41:59.428898   21193 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:59.428919   21193 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351970 image ls --format json --alsologtostderr:
[{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-351970"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:ac69c75ceb690a8a5b6bb48470222c2394d6313546262d7e5f0e3de7c12589b2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-351970"],"size":"988"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351970 image ls --format json --alsologtostderr:
I0327 17:41:59.000475   21148 out.go:291] Setting OutFile to fd 1 ...
I0327 17:41:59.000572   21148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.000580   21148 out.go:304] Setting ErrFile to fd 2...
I0327 17:41:59.000584   21148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.000783   21148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
I0327 17:41:59.001363   21148 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.001513   21148 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.002008   21148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.002064   21148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.017667   21148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
I0327 17:41:59.018190   21148 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.018852   21148 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.018882   21148 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.019291   21148 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.019508   21148 main.go:141] libmachine: (functional-351970) Calling .GetState
I0327 17:41:59.021442   21148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.021481   21148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.039779   21148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35569
I0327 17:41:59.040266   21148 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.040765   21148 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.040788   21148 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.041187   21148 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.041362   21148 main.go:141] libmachine: (functional-351970) Calling .DriverName
I0327 17:41:59.041565   21148 ssh_runner.go:195] Run: systemctl --version
I0327 17:41:59.041596   21148 main.go:141] libmachine: (functional-351970) Calling .GetSSHHostname
I0327 17:41:59.044506   21148 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.044928   21148 main.go:141] libmachine: (functional-351970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:ea:4d", ip: ""} in network mk-functional-351970: {Iface:virbr1 ExpiryTime:2024-03-27 18:37:54 +0000 UTC Type:0 Mac:52:54:00:db:ea:4d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-351970 Clientid:01:52:54:00:db:ea:4d}
I0327 17:41:59.044961   21148 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined IP address 192.168.39.114 and MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.045081   21148 main.go:141] libmachine: (functional-351970) Calling .GetSSHPort
I0327 17:41:59.045250   21148 main.go:141] libmachine: (functional-351970) Calling .GetSSHKeyPath
I0327 17:41:59.045389   21148 main.go:141] libmachine: (functional-351970) Calling .GetSSHUsername
I0327 17:41:59.045617   21148 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/functional-351970/id_rsa Username:docker}
I0327 17:41:59.142299   21148 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 17:41:59.192132   21148 main.go:141] libmachine: Making call to close driver server
I0327 17:41:59.192174   21148 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:59.192452   21148 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:59.192478   21148 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 17:41:59.192488   21148 main.go:141] libmachine: Making call to close driver server
I0327 17:41:59.192496   21148 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:59.192755   21148 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:41:59.192815   21148 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:59.192823   21148 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351970 image ls --format yaml --alsologtostderr:
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:ac69c75ceb690a8a5b6bb48470222c2394d6313546262d7e5f0e3de7c12589b2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-351970
size: "988"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-351970
size: "10823156"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "70534964"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351970 image ls --format yaml --alsologtostderr:
I0327 17:41:58.704057   21095 out.go:291] Setting OutFile to fd 1 ...
I0327 17:41:58.704281   21095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:58.704289   21095 out.go:304] Setting ErrFile to fd 2...
I0327 17:41:58.704293   21095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:58.706072   21095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
I0327 17:41:58.706894   21095 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:58.707045   21095 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:58.707418   21095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:58.707459   21095 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:58.722491   21095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
I0327 17:41:58.722978   21095 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:58.723520   21095 main.go:141] libmachine: Using API Version  1
I0327 17:41:58.723559   21095 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:58.723884   21095 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:58.724050   21095 main.go:141] libmachine: (functional-351970) Calling .GetState
I0327 17:41:58.725924   21095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:58.725960   21095 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:58.740580   21095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
I0327 17:41:58.740969   21095 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:58.741407   21095 main.go:141] libmachine: Using API Version  1
I0327 17:41:58.741436   21095 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:58.741804   21095 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:58.742028   21095 main.go:141] libmachine: (functional-351970) Calling .DriverName
I0327 17:41:58.742257   21095 ssh_runner.go:195] Run: systemctl --version
I0327 17:41:58.742283   21095 main.go:141] libmachine: (functional-351970) Calling .GetSSHHostname
I0327 17:41:58.746133   21095 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:58.746626   21095 main.go:141] libmachine: (functional-351970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:ea:4d", ip: ""} in network mk-functional-351970: {Iface:virbr1 ExpiryTime:2024-03-27 18:37:54 +0000 UTC Type:0 Mac:52:54:00:db:ea:4d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-351970 Clientid:01:52:54:00:db:ea:4d}
I0327 17:41:58.746668   21095 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined IP address 192.168.39.114 and MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:58.746972   21095 main.go:141] libmachine: (functional-351970) Calling .GetSSHPort
I0327 17:41:58.747140   21095 main.go:141] libmachine: (functional-351970) Calling .GetSSHKeyPath
I0327 17:41:58.747331   21095 main.go:141] libmachine: (functional-351970) Calling .GetSSHUsername
I0327 17:41:58.747487   21095 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/functional-351970/id_rsa Username:docker}
I0327 17:41:58.856956   21095 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 17:41:58.936538   21095 main.go:141] libmachine: Making call to close driver server
I0327 17:41:58.936552   21095 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:58.936831   21095 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:58.936847   21095 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 17:41:58.936874   21095 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:41:58.936927   21095 main.go:141] libmachine: Making call to close driver server
I0327 17:41:58.936938   21095 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:41:58.937171   21095 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:41:58.937198   21095 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:41:58.937207   21095 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh pgrep buildkitd: exit status 1 (252.542091ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image build -t localhost/my-image:functional-351970 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image build -t localhost/my-image:functional-351970 testdata/build --alsologtostderr: (4.231580839s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351970 image build -t localhost/my-image:functional-351970 testdata/build --alsologtostderr:
I0327 17:41:59.060375   21160 out.go:291] Setting OutFile to fd 1 ...
I0327 17:41:59.060522   21160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.060535   21160 out.go:304] Setting ErrFile to fd 2...
I0327 17:41:59.060542   21160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 17:41:59.060712   21160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
I0327 17:41:59.061275   21160 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.061860   21160 config.go:182] Loaded profile config "functional-351970": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0327 17:41:59.062676   21160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.062759   21160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.078578   21160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45061
I0327 17:41:59.078983   21160 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.079483   21160 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.079504   21160 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.079836   21160 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.080013   21160 main.go:141] libmachine: (functional-351970) Calling .GetState
I0327 17:41:59.081731   21160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0327 17:41:59.081775   21160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 17:41:59.095511   21160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
I0327 17:41:59.095882   21160 main.go:141] libmachine: () Calling .GetVersion
I0327 17:41:59.096377   21160 main.go:141] libmachine: Using API Version  1
I0327 17:41:59.096402   21160 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 17:41:59.096693   21160 main.go:141] libmachine: () Calling .GetMachineName
I0327 17:41:59.096845   21160 main.go:141] libmachine: (functional-351970) Calling .DriverName
I0327 17:41:59.097040   21160 ssh_runner.go:195] Run: systemctl --version
I0327 17:41:59.097060   21160 main.go:141] libmachine: (functional-351970) Calling .GetSSHHostname
I0327 17:41:59.099488   21160 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.099821   21160 main.go:141] libmachine: (functional-351970) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:ea:4d", ip: ""} in network mk-functional-351970: {Iface:virbr1 ExpiryTime:2024-03-27 18:37:54 +0000 UTC Type:0 Mac:52:54:00:db:ea:4d Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-351970 Clientid:01:52:54:00:db:ea:4d}
I0327 17:41:59.099850   21160 main.go:141] libmachine: (functional-351970) DBG | domain functional-351970 has defined IP address 192.168.39.114 and MAC address 52:54:00:db:ea:4d in network mk-functional-351970
I0327 17:41:59.099995   21160 main.go:141] libmachine: (functional-351970) Calling .GetSSHPort
I0327 17:41:59.100166   21160 main.go:141] libmachine: (functional-351970) Calling .GetSSHKeyPath
I0327 17:41:59.100317   21160 main.go:141] libmachine: (functional-351970) Calling .GetSSHUsername
I0327 17:41:59.100459   21160 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/functional-351970/id_rsa Username:docker}
I0327 17:41:59.184787   21160 build_images.go:161] Building image from path: /tmp/build.2101384241.tar
I0327 17:41:59.184844   21160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 17:41:59.216611   21160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2101384241.tar
I0327 17:41:59.222392   21160 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2101384241.tar: stat -c "%s %y" /var/lib/minikube/build/build.2101384241.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2101384241.tar': No such file or directory
I0327 17:41:59.222425   21160 ssh_runner.go:362] scp /tmp/build.2101384241.tar --> /var/lib/minikube/build/build.2101384241.tar (3072 bytes)
I0327 17:41:59.261729   21160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2101384241
I0327 17:41:59.276241   21160 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2101384241 -xf /var/lib/minikube/build/build.2101384241.tar
I0327 17:41:59.289786   21160 containerd.go:394] Building image: /var/lib/minikube/build/build.2101384241
I0327 17:41:59.289855   21160 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2101384241 --local dockerfile=/var/lib/minikube/build/build.2101384241 --output type=image,name=localhost/my-image:functional-351970
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.9s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.1s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:0e7b7e3d34437214a5abb22e8e2dcb5ab4269de404c1943b503b3dd94e8b1cbe 0.0s done
#8 exporting config sha256:35827ad8afddfff31db00a9da9df8c1311f19279198f25ad86e4d24db2ab92ee 0.0s done
#8 naming to localhost/my-image:functional-351970 done
#8 DONE 0.2s
I0327 17:42:03.192276   21160 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2101384241 --local dockerfile=/var/lib/minikube/build/build.2101384241 --output type=image,name=localhost/my-image:functional-351970: (3.902392233s)
I0327 17:42:03.192360   21160 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2101384241
I0327 17:42:03.215990   21160 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2101384241.tar
I0327 17:42:03.230185   21160 build_images.go:217] Built localhost/my-image:functional-351970 from /tmp/build.2101384241.tar
I0327 17:42:03.230218   21160 build_images.go:133] succeeded building to: functional-351970
I0327 17:42:03.230224   21160 build_images.go:134] failed building to: 
I0327 17:42:03.230249   21160 main.go:141] libmachine: Making call to close driver server
I0327 17:42:03.230261   21160 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:42:03.230520   21160 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:42:03.230543   21160 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 17:42:03.230552   21160 main.go:141] libmachine: Making call to close driver server
I0327 17:42:03.230559   21160 main.go:141] libmachine: (functional-351970) Calling .Close
I0327 17:42:03.230570   21160 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:42:03.230871   21160 main.go:141] libmachine: (functional-351970) DBG | Closing plugin on server side
I0327 17:42:03.230898   21160 main.go:141] libmachine: Successfully made call to close driver server
I0327 17:42:03.230921   21160 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.73s)

TestFunctional/parallel/ImageCommands/Setup (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.116843117s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-351970
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (63.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-351970 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-351970 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-btflz" [03241c12-c8e5-4646-847b-d15b6a5fa535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "hello-node-d7447cc7f-btflz" [03241c12-c8e5-4646-847b-d15b6a5fa535] Pending
helpers_test.go:344: "hello-node-d7447cc7f-btflz" [03241c12-c8e5-4646-847b-d15b6a5fa535] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-btflz" [03241c12-c8e5-4646-847b-d15b6a5fa535] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m3.004653742s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (63.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr: (3.880246291s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr: (2.556522675s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.78s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.968848137s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-351970
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image load --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr: (3.739476102s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image save gcr.io/google-containers/addon-resizer:functional-351970 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image rm gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.28942387s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-351970
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 image save --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-351970 image save --daemon gcr.io/google-containers/addon-resizer:functional-351970 --alsologtostderr: (1.004150179s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-351970
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "216.139385ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "56.359194ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "230.867678ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "54.641026ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (43.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdany-port4104785684/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711561270258986522" to /tmp/TestFunctionalparallelMountCmdany-port4104785684/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711561270258986522" to /tmp/TestFunctionalparallelMountCmdany-port4104785684/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711561270258986522" to /tmp/TestFunctionalparallelMountCmdany-port4104785684/001/test-1711561270258986522
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.729371ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 27 17:41 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 27 17:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 27 17:41 test-1711561270258986522
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh cat /mount-9p/test-1711561270258986522
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-351970 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Pending
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Pending
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Running
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [33e88382-ca72-48bc-b763-783e1a842862] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 41.005225242s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-351970 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdany-port4104785684/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (43.32s)

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdspecific-port2572421989/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.532155ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdspecific-port2572421989/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "sudo umount -f /mount-9p": exit status 1 (203.894831ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-351970 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdspecific-port2572421989/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

TestFunctional/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service list -o json
functional_test.go:1490: Took "323.897823ms" to run "out/minikube-linux-amd64 -p functional-351970 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T" /mount1: exit status 1 (266.428157ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-351970 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351970 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3475947966/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.114:31464
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.114:31464
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-351970 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-351970
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-351970
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-351970
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (277.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-879451 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 17:43:53.678372   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:44:21.366522   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:45:51.054142   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.060145   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.071120   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.091364   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.131615   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.211926   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.372380   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:51.693181   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:52.334045   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:53.615108   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:45:56.175847   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:46:01.296213   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:46:11.537315   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:46:32.017543   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-879451 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m36.326785916s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (277.01s)

TestMultiControlPlane/serial/DeployApp (6.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-879451 -- rollout status deployment/busybox: (3.905050012s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-b2vb2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-kv494 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-lf5mr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-b2vb2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-kv494 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-lf5mr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-b2vb2 -- nslookup kubernetes.default.svc.cluster.local
E0327 17:47:12.977755   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-kv494 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-lf5mr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.37s)

TestMultiControlPlane/serial/PingHostFromPods (1.33s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-b2vb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-b2vb2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-kv494 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-kv494 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-lf5mr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-879451 -- exec busybox-7fdf7869d9-lf5mr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.33s)

TestMultiControlPlane/serial/AddWorkerNode (47.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-879451 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-879451 -v=7 --alsologtostderr: (46.912087767s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.81s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-879451 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (13.67s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp testdata/cp-test.txt ha-879451:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2969831409/001/cp-test_ha-879451.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451:/home/docker/cp-test.txt ha-879451-m02:/home/docker/cp-test_ha-879451_ha-879451-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test_ha-879451_ha-879451-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451:/home/docker/cp-test.txt ha-879451-m03:/home/docker/cp-test_ha-879451_ha-879451-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test_ha-879451_ha-879451-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451:/home/docker/cp-test.txt ha-879451-m04:/home/docker/cp-test_ha-879451_ha-879451-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test_ha-879451_ha-879451-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp testdata/cp-test.txt ha-879451-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2969831409/001/cp-test_ha-879451-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m02:/home/docker/cp-test.txt ha-879451:/home/docker/cp-test_ha-879451-m02_ha-879451.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test_ha-879451-m02_ha-879451.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m02:/home/docker/cp-test.txt ha-879451-m03:/home/docker/cp-test_ha-879451-m02_ha-879451-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test_ha-879451-m02_ha-879451-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m02:/home/docker/cp-test.txt ha-879451-m04:/home/docker/cp-test_ha-879451-m02_ha-879451-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test_ha-879451-m02_ha-879451-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp testdata/cp-test.txt ha-879451-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2969831409/001/cp-test_ha-879451-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m03:/home/docker/cp-test.txt ha-879451:/home/docker/cp-test_ha-879451-m03_ha-879451.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test_ha-879451-m03_ha-879451.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m03:/home/docker/cp-test.txt ha-879451-m02:/home/docker/cp-test_ha-879451-m03_ha-879451-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test_ha-879451-m03_ha-879451-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m03:/home/docker/cp-test.txt ha-879451-m04:/home/docker/cp-test_ha-879451-m03_ha-879451-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test_ha-879451-m03_ha-879451-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp testdata/cp-test.txt ha-879451-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2969831409/001/cp-test_ha-879451-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m04:/home/docker/cp-test.txt ha-879451:/home/docker/cp-test_ha-879451-m04_ha-879451.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451 "sudo cat /home/docker/cp-test_ha-879451-m04_ha-879451.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m04:/home/docker/cp-test.txt ha-879451-m02:/home/docker/cp-test_ha-879451-m04_ha-879451-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m02 "sudo cat /home/docker/cp-test_ha-879451-m04_ha-879451-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 cp ha-879451-m04:/home/docker/cp-test.txt ha-879451-m03:/home/docker/cp-test_ha-879451-m04_ha-879451-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 ssh -n ha-879451-m03 "sudo cat /home/docker/cp-test_ha-879451-m04_ha-879451-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.67s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (93.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 node stop m02 -v=7 --alsologtostderr
E0327 17:48:34.898619   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:48:53.677564   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-879451 node stop m02 -v=7 --alsologtostderr: (1m32.453030785s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr: exit status 7 (693.437881ms)

                                                
                                                
-- stdout --
	ha-879451
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-879451-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879451-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-879451-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 17:49:49.369749   25711 out.go:291] Setting OutFile to fd 1 ...
	I0327 17:49:49.369997   25711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:49:49.370009   25711 out.go:304] Setting ErrFile to fd 2...
	I0327 17:49:49.370015   25711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 17:49:49.370207   25711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 17:49:49.370379   25711 out.go:298] Setting JSON to false
	I0327 17:49:49.370410   25711 mustload.go:65] Loading cluster: ha-879451
	I0327 17:49:49.370465   25711 notify.go:220] Checking for updates...
	I0327 17:49:49.370795   25711 config.go:182] Loaded profile config "ha-879451": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 17:49:49.370811   25711 status.go:255] checking status of ha-879451 ...
	I0327 17:49:49.371230   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.371281   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.387356   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0327 17:49:49.387782   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.388362   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.388394   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.388705   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.388876   25711 main.go:141] libmachine: (ha-879451) Calling .GetState
	I0327 17:49:49.390615   25711 status.go:330] ha-879451 host status = "Running" (err=<nil>)
	I0327 17:49:49.390632   25711 host.go:66] Checking if "ha-879451" exists ...
	I0327 17:49:49.391071   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.391118   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.405340   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0327 17:49:49.405764   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.406194   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.406213   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.406589   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.406776   25711 main.go:141] libmachine: (ha-879451) Calling .GetIP
	I0327 17:49:49.409502   25711 main.go:141] libmachine: (ha-879451) DBG | domain ha-879451 has defined MAC address 52:54:00:af:30:f1 in network mk-ha-879451
	I0327 17:49:49.409944   25711 main.go:141] libmachine: (ha-879451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:30:f1", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:42:45 +0000 UTC Type:0 Mac:52:54:00:af:30:f1 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-879451 Clientid:01:52:54:00:af:30:f1}
	I0327 17:49:49.409984   25711 main.go:141] libmachine: (ha-879451) DBG | domain ha-879451 has defined IP address 192.168.39.200 and MAC address 52:54:00:af:30:f1 in network mk-ha-879451
	I0327 17:49:49.410135   25711 host.go:66] Checking if "ha-879451" exists ...
	I0327 17:49:49.410427   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.410471   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.425111   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0327 17:49:49.425545   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.426037   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.426060   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.426350   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.426554   25711 main.go:141] libmachine: (ha-879451) Calling .DriverName
	I0327 17:49:49.426731   25711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 17:49:49.426750   25711 main.go:141] libmachine: (ha-879451) Calling .GetSSHHostname
	I0327 17:49:49.429612   25711 main.go:141] libmachine: (ha-879451) DBG | domain ha-879451 has defined MAC address 52:54:00:af:30:f1 in network mk-ha-879451
	I0327 17:49:49.430029   25711 main.go:141] libmachine: (ha-879451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:30:f1", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:42:45 +0000 UTC Type:0 Mac:52:54:00:af:30:f1 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-879451 Clientid:01:52:54:00:af:30:f1}
	I0327 17:49:49.430063   25711 main.go:141] libmachine: (ha-879451) DBG | domain ha-879451 has defined IP address 192.168.39.200 and MAC address 52:54:00:af:30:f1 in network mk-ha-879451
	I0327 17:49:49.430157   25711 main.go:141] libmachine: (ha-879451) Calling .GetSSHPort
	I0327 17:49:49.430365   25711 main.go:141] libmachine: (ha-879451) Calling .GetSSHKeyPath
	I0327 17:49:49.430547   25711 main.go:141] libmachine: (ha-879451) Calling .GetSSHUsername
	I0327 17:49:49.430741   25711 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/ha-879451/id_rsa Username:docker}
	I0327 17:49:49.523976   25711 ssh_runner.go:195] Run: systemctl --version
	I0327 17:49:49.532216   25711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 17:49:49.553684   25711 kubeconfig.go:125] found "ha-879451" server: "https://192.168.39.254:8443"
	I0327 17:49:49.553706   25711 api_server.go:166] Checking apiserver status ...
	I0327 17:49:49.553740   25711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 17:49:49.578281   25711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W0327 17:49:49.604912   25711 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 17:49:49.604959   25711 ssh_runner.go:195] Run: ls
	I0327 17:49:49.610407   25711 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0327 17:49:49.614697   25711 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0327 17:49:49.614716   25711 status.go:422] ha-879451 apiserver status = Running (err=<nil>)
	I0327 17:49:49.614725   25711 status.go:257] ha-879451 status: &{Name:ha-879451 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 17:49:49.614746   25711 status.go:255] checking status of ha-879451-m02 ...
	I0327 17:49:49.615033   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.615072   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.631357   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I0327 17:49:49.631755   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.632203   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.632228   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.632522   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.632678   25711 main.go:141] libmachine: (ha-879451-m02) Calling .GetState
	I0327 17:49:49.634211   25711 status.go:330] ha-879451-m02 host status = "Stopped" (err=<nil>)
	I0327 17:49:49.634229   25711 status.go:343] host is not running, skipping remaining checks
	I0327 17:49:49.634237   25711 status.go:257] ha-879451-m02 status: &{Name:ha-879451-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 17:49:49.634260   25711 status.go:255] checking status of ha-879451-m03 ...
	I0327 17:49:49.634667   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.634730   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.653561   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0327 17:49:49.654015   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.654537   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.654562   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.654847   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.655014   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetState
	I0327 17:49:49.656477   25711 status.go:330] ha-879451-m03 host status = "Running" (err=<nil>)
	I0327 17:49:49.656490   25711 host.go:66] Checking if "ha-879451-m03" exists ...
	I0327 17:49:49.656820   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.656868   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.673068   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0327 17:49:49.673527   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.674052   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.674088   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.674418   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.674613   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetIP
	I0327 17:49:49.677509   25711 main.go:141] libmachine: (ha-879451-m03) DBG | domain ha-879451-m03 has defined MAC address 52:54:00:fc:38:98 in network mk-ha-879451
	I0327 17:49:49.677879   25711 main.go:141] libmachine: (ha-879451-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:38:98", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:46:13 +0000 UTC Type:0 Mac:52:54:00:fc:38:98 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-879451-m03 Clientid:01:52:54:00:fc:38:98}
	I0327 17:49:49.677905   25711 main.go:141] libmachine: (ha-879451-m03) DBG | domain ha-879451-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:38:98 in network mk-ha-879451
	I0327 17:49:49.678031   25711 host.go:66] Checking if "ha-879451-m03" exists ...
	I0327 17:49:49.678378   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.678426   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.692836   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38571
	I0327 17:49:49.693340   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.693959   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.693981   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.694307   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.694536   25711 main.go:141] libmachine: (ha-879451-m03) Calling .DriverName
	I0327 17:49:49.694718   25711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 17:49:49.694738   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetSSHHostname
	I0327 17:49:49.697368   25711 main.go:141] libmachine: (ha-879451-m03) DBG | domain ha-879451-m03 has defined MAC address 52:54:00:fc:38:98 in network mk-ha-879451
	I0327 17:49:49.697848   25711 main.go:141] libmachine: (ha-879451-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:38:98", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:46:13 +0000 UTC Type:0 Mac:52:54:00:fc:38:98 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-879451-m03 Clientid:01:52:54:00:fc:38:98}
	I0327 17:49:49.697881   25711 main.go:141] libmachine: (ha-879451-m03) DBG | domain ha-879451-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:fc:38:98 in network mk-ha-879451
	I0327 17:49:49.697991   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetSSHPort
	I0327 17:49:49.698141   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetSSHKeyPath
	I0327 17:49:49.698314   25711 main.go:141] libmachine: (ha-879451-m03) Calling .GetSSHUsername
	I0327 17:49:49.698492   25711 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/ha-879451-m03/id_rsa Username:docker}
	I0327 17:49:49.788540   25711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 17:49:49.809907   25711 kubeconfig.go:125] found "ha-879451" server: "https://192.168.39.254:8443"
	I0327 17:49:49.809944   25711 api_server.go:166] Checking apiserver status ...
	I0327 17:49:49.809983   25711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 17:49:49.826205   25711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0327 17:49:49.837141   25711 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 17:49:49.837218   25711 ssh_runner.go:195] Run: ls
	I0327 17:49:49.842582   25711 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0327 17:49:49.847339   25711 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0327 17:49:49.847361   25711 status.go:422] ha-879451-m03 apiserver status = Running (err=<nil>)
	I0327 17:49:49.847369   25711 status.go:257] ha-879451-m03 status: &{Name:ha-879451-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 17:49:49.847381   25711 status.go:255] checking status of ha-879451-m04 ...
	I0327 17:49:49.847665   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.847703   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.862642   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0327 17:49:49.863049   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.863509   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.863531   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.863883   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.864080   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetState
	I0327 17:49:49.865489   25711 status.go:330] ha-879451-m04 host status = "Running" (err=<nil>)
	I0327 17:49:49.865503   25711 host.go:66] Checking if "ha-879451-m04" exists ...
	I0327 17:49:49.865762   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.865792   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.880512   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0327 17:49:49.880914   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.881340   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.881358   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.881689   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.881865   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetIP
	I0327 17:49:49.884462   25711 main.go:141] libmachine: (ha-879451-m04) DBG | domain ha-879451-m04 has defined MAC address 52:54:00:a3:c4:88 in network mk-ha-879451
	I0327 17:49:49.884874   25711 main.go:141] libmachine: (ha-879451-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c4:88", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:47:31 +0000 UTC Type:0 Mac:52:54:00:a3:c4:88 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-879451-m04 Clientid:01:52:54:00:a3:c4:88}
	I0327 17:49:49.884916   25711 main.go:141] libmachine: (ha-879451-m04) DBG | domain ha-879451-m04 has defined IP address 192.168.39.195 and MAC address 52:54:00:a3:c4:88 in network mk-ha-879451
	I0327 17:49:49.885058   25711 host.go:66] Checking if "ha-879451-m04" exists ...
	I0327 17:49:49.885340   25711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 17:49:49.885375   25711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 17:49:49.900084   25711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0327 17:49:49.900486   25711 main.go:141] libmachine: () Calling .GetVersion
	I0327 17:49:49.900935   25711 main.go:141] libmachine: Using API Version  1
	I0327 17:49:49.900957   25711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 17:49:49.901261   25711 main.go:141] libmachine: () Calling .GetMachineName
	I0327 17:49:49.901448   25711 main.go:141] libmachine: (ha-879451-m04) Calling .DriverName
	I0327 17:49:49.901623   25711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 17:49:49.901640   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetSSHHostname
	I0327 17:49:49.904217   25711 main.go:141] libmachine: (ha-879451-m04) DBG | domain ha-879451-m04 has defined MAC address 52:54:00:a3:c4:88 in network mk-ha-879451
	I0327 17:49:49.904588   25711 main.go:141] libmachine: (ha-879451-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c4:88", ip: ""} in network mk-ha-879451: {Iface:virbr1 ExpiryTime:2024-03-27 18:47:31 +0000 UTC Type:0 Mac:52:54:00:a3:c4:88 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-879451-m04 Clientid:01:52:54:00:a3:c4:88}
	I0327 17:49:49.904621   25711 main.go:141] libmachine: (ha-879451-m04) DBG | domain ha-879451-m04 has defined IP address 192.168.39.195 and MAC address 52:54:00:a3:c4:88 in network mk-ha-879451
	I0327 17:49:49.904736   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetSSHPort
	I0327 17:49:49.904880   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetSSHKeyPath
	I0327 17:49:49.905027   25711 main.go:141] libmachine: (ha-879451-m04) Calling .GetSSHUsername
	I0327 17:49:49.905202   25711 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/ha-879451-m04/id_rsa Username:docker}
	I0327 17:49:49.991945   25711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 17:49:50.008374   25711 status.go:257] ha-879451-m04 status: &{Name:ha-879451-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-879451 node start m02 -v=7 --alsologtostderr: (40.681721787s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.57s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (499.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-879451 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-879451 -v=7 --alsologtostderr
E0327 17:50:51.053489   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:51:18.739063   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 17:53:53.678201   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-879451 -v=7 --alsologtostderr: (4m37.971299116s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-879451 --wait=true -v=7 --alsologtostderr
E0327 17:55:16.727122   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 17:55:51.053807   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-879451 --wait=true -v=7 --alsologtostderr: (3m41.619881433s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-879451
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (499.70s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 node delete m03 -v=7 --alsologtostderr
E0327 17:58:53.678044   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-879451 node delete m03 -v=7 --alsologtostderr: (6.406170484s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (276.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 stop -v=7 --alsologtostderr
E0327 18:00:51.053688   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 18:02:14.099868   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-879451 stop -v=7 --alsologtostderr: (4m36.322117189s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr: exit status 7 (113.719845ms)

                                                
                                                
-- stdout --
	ha-879451
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879451-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-879451-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 18:03:36.153873   28928 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:03:36.153982   28928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:03:36.153991   28928 out.go:304] Setting ErrFile to fd 2...
	I0327 18:03:36.153995   28928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:03:36.154150   28928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 18:03:36.154289   28928 out.go:298] Setting JSON to false
	I0327 18:03:36.154310   28928 mustload.go:65] Loading cluster: ha-879451
	I0327 18:03:36.154411   28928 notify.go:220] Checking for updates...
	I0327 18:03:36.154660   28928 config.go:182] Loaded profile config "ha-879451": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:03:36.154674   28928 status.go:255] checking status of ha-879451 ...
	I0327 18:03:36.155022   28928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:03:36.155077   28928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:03:36.175079   28928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0327 18:03:36.175513   28928 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:03:36.176012   28928 main.go:141] libmachine: Using API Version  1
	I0327 18:03:36.176032   28928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:03:36.176441   28928 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:03:36.176635   28928 main.go:141] libmachine: (ha-879451) Calling .GetState
	I0327 18:03:36.178079   28928 status.go:330] ha-879451 host status = "Stopped" (err=<nil>)
	I0327 18:03:36.178094   28928 status.go:343] host is not running, skipping remaining checks
	I0327 18:03:36.178102   28928 status.go:257] ha-879451 status: &{Name:ha-879451 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 18:03:36.178146   28928 status.go:255] checking status of ha-879451-m02 ...
	I0327 18:03:36.178541   28928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:03:36.178613   28928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:03:36.192277   28928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0327 18:03:36.192565   28928 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:03:36.192947   28928 main.go:141] libmachine: Using API Version  1
	I0327 18:03:36.192971   28928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:03:36.193246   28928 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:03:36.193410   28928 main.go:141] libmachine: (ha-879451-m02) Calling .GetState
	I0327 18:03:36.194921   28928 status.go:330] ha-879451-m02 host status = "Stopped" (err=<nil>)
	I0327 18:03:36.194934   28928 status.go:343] host is not running, skipping remaining checks
	I0327 18:03:36.194940   28928 status.go:257] ha-879451-m02 status: &{Name:ha-879451-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 18:03:36.194953   28928 status.go:255] checking status of ha-879451-m04 ...
	I0327 18:03:36.195220   28928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:03:36.195251   28928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:03:36.208629   28928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0327 18:03:36.209007   28928 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:03:36.209393   28928 main.go:141] libmachine: Using API Version  1
	I0327 18:03:36.209414   28928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:03:36.209793   28928 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:03:36.210020   28928 main.go:141] libmachine: (ha-879451-m04) Calling .GetState
	I0327 18:03:36.211656   28928 status.go:330] ha-879451-m04 host status = "Stopped" (err=<nil>)
	I0327 18:03:36.211670   28928 status.go:343] host is not running, skipping remaining checks
	I0327 18:03:36.211677   28928 status.go:257] ha-879451-m04 status: &{Name:ha-879451-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.44s)

TestMultiControlPlane/serial/RestartCluster (118.08s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-879451 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 18:03:53.679734   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-879451 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m57.317380601s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.08s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (69.67s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-879451 --control-plane -v=7 --alsologtostderr
E0327 18:05:51.053918   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-879451 --control-plane -v=7 --alsologtostderr: (1m8.844030071s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-879451 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (61.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-630554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-630554 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.114903892s)
--- PASS: TestJSONOutput/start/Command (61.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-630554 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-630554 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-630554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-630554 --output=json --user=testUser: (7.355179345s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-777278 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-777278 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.60711ms)

-- stdout --
	{"specversion":"1.0","id":"4b6c20bf-3caf-4cd2-acff-35bef9cdfb83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-777278] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"56510644-55d3-4a8a-8577-7fcfd895117c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"4e9d917f-aef6-4599-bdbd-b018beea0ce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1499cc35-ba1e-4359-b2fb-b76375226b38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig"}}
	{"specversion":"1.0","id":"19133d53-d93e-4b52-a8f8-81e421fd7368","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube"}}
	{"specversion":"1.0","id":"fc7fa7ee-3b6a-45b5-b060-9a16dbe4c4a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"12bbb6af-822b-4554-bcd9-bf672d7b22ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9b8f7464-e2bc-4352-b720-9345462c4061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-777278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-777278
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (93.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-498978 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-498978 --driver=kvm2  --container-runtime=containerd: (43.821185205s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-501832 --driver=kvm2  --container-runtime=containerd
E0327 18:08:53.677996   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-501832 --driver=kvm2  --container-runtime=containerd: (46.712320625s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-498978
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-501832
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-501832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-501832
helpers_test.go:175: Cleaning up "first-498978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-498978
--- PASS: TestMinikubeProfile (93.37s)

TestMountStart/serial/StartWithMountFirst (29.61s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-998444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-998444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.606870843s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.61s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-998444 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-998444 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.25s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014123 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014123 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.245762489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.25s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.86s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-998444 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.76s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-014123
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-014123: (1.76167004s)
--- PASS: TestMountStart/serial/Stop (1.76s)

TestMountStart/serial/RestartStopped (23.55s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014123
E0327 18:10:51.053740   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014123: (22.55437938s)
--- PASS: TestMountStart/serial/RestartStopped (23.55s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014123 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (105.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 18:11:56.728090   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m44.904587322s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.32s)

TestMultiNode/serial/DeployApp2Nodes (4.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-183938 -- rollout status deployment/busybox: (3.352303941s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-9f5dx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-tmqqg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-9f5dx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-tmqqg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-9f5dx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-tmqqg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-9f5dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-9f5dx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-tmqqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183938 -- exec busybox-7fdf7869d9-tmqqg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (40.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183938 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-183938 -v 3 --alsologtostderr: (40.326736289s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.91s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-183938 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)
TestMultiNode/serial/CopyFile (7.48s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp testdata/cp-test.txt multinode-183938:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2284434739/001/cp-test_multinode-183938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938:/home/docker/cp-test.txt multinode-183938-m02:/home/docker/cp-test_multinode-183938_multinode-183938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test_multinode-183938_multinode-183938-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938:/home/docker/cp-test.txt multinode-183938-m03:/home/docker/cp-test_multinode-183938_multinode-183938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test_multinode-183938_multinode-183938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp testdata/cp-test.txt multinode-183938-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2284434739/001/cp-test_multinode-183938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m02:/home/docker/cp-test.txt multinode-183938:/home/docker/cp-test_multinode-183938-m02_multinode-183938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test_multinode-183938-m02_multinode-183938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m02:/home/docker/cp-test.txt multinode-183938-m03:/home/docker/cp-test_multinode-183938-m02_multinode-183938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test_multinode-183938-m02_multinode-183938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp testdata/cp-test.txt multinode-183938-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2284434739/001/cp-test_multinode-183938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m03:/home/docker/cp-test.txt multinode-183938:/home/docker/cp-test_multinode-183938-m03_multinode-183938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938 "sudo cat /home/docker/cp-test_multinode-183938-m03_multinode-183938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 cp multinode-183938-m03:/home/docker/cp-test.txt multinode-183938-m02:/home/docker/cp-test_multinode-183938-m03_multinode-183938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 ssh -n multinode-183938-m02 "sudo cat /home/docker/cp-test_multinode-183938-m03_multinode-183938-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.48s)
TestMultiNode/serial/StopNode (2.39s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-183938 node stop m03: (1.528129425s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183938 status: exit status 7 (432.084537ms)
-- stdout --
	multinode-183938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr: exit status 7 (424.944605ms)
-- stdout --
	multinode-183938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0327 18:13:41.119992   35607 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:13:41.120104   35607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:13:41.120113   35607 out.go:304] Setting ErrFile to fd 2...
	I0327 18:13:41.120117   35607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:13:41.120280   35607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 18:13:41.120426   35607 out.go:298] Setting JSON to false
	I0327 18:13:41.120448   35607 mustload.go:65] Loading cluster: multinode-183938
	I0327 18:13:41.120494   35607 notify.go:220] Checking for updates...
	I0327 18:13:41.120976   35607 config.go:182] Loaded profile config "multinode-183938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:13:41.120997   35607 status.go:255] checking status of multinode-183938 ...
	I0327 18:13:41.121484   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.121524   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.136216   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0327 18:13:41.136567   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.137102   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.137146   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.137562   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.137795   35607 main.go:141] libmachine: (multinode-183938) Calling .GetState
	I0327 18:13:41.139347   35607 status.go:330] multinode-183938 host status = "Running" (err=<nil>)
	I0327 18:13:41.139363   35607 host.go:66] Checking if "multinode-183938" exists ...
	I0327 18:13:41.139779   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.139826   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.153952   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0327 18:13:41.154287   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.154711   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.154743   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.155142   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.155325   35607 main.go:141] libmachine: (multinode-183938) Calling .GetIP
	I0327 18:13:41.158128   35607 main.go:141] libmachine: (multinode-183938) DBG | domain multinode-183938 has defined MAC address 52:54:00:29:1b:56 in network mk-multinode-183938
	I0327 18:13:41.158526   35607 main.go:141] libmachine: (multinode-183938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:1b:56", ip: ""} in network mk-multinode-183938: {Iface:virbr1 ExpiryTime:2024-03-27 19:11:14 +0000 UTC Type:0 Mac:52:54:00:29:1b:56 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-183938 Clientid:01:52:54:00:29:1b:56}
	I0327 18:13:41.158552   35607 main.go:141] libmachine: (multinode-183938) DBG | domain multinode-183938 has defined IP address 192.168.39.24 and MAC address 52:54:00:29:1b:56 in network mk-multinode-183938
	I0327 18:13:41.158669   35607 host.go:66] Checking if "multinode-183938" exists ...
	I0327 18:13:41.158920   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.158950   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.172678   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I0327 18:13:41.172968   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.173324   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.173346   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.173689   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.173894   35607 main.go:141] libmachine: (multinode-183938) Calling .DriverName
	I0327 18:13:41.174077   35607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 18:13:41.174104   35607 main.go:141] libmachine: (multinode-183938) Calling .GetSSHHostname
	I0327 18:13:41.176582   35607 main.go:141] libmachine: (multinode-183938) DBG | domain multinode-183938 has defined MAC address 52:54:00:29:1b:56 in network mk-multinode-183938
	I0327 18:13:41.176940   35607 main.go:141] libmachine: (multinode-183938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:1b:56", ip: ""} in network mk-multinode-183938: {Iface:virbr1 ExpiryTime:2024-03-27 19:11:14 +0000 UTC Type:0 Mac:52:54:00:29:1b:56 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-183938 Clientid:01:52:54:00:29:1b:56}
	I0327 18:13:41.176965   35607 main.go:141] libmachine: (multinode-183938) DBG | domain multinode-183938 has defined IP address 192.168.39.24 and MAC address 52:54:00:29:1b:56 in network mk-multinode-183938
	I0327 18:13:41.177067   35607 main.go:141] libmachine: (multinode-183938) Calling .GetSSHPort
	I0327 18:13:41.177225   35607 main.go:141] libmachine: (multinode-183938) Calling .GetSSHKeyPath
	I0327 18:13:41.177359   35607 main.go:141] libmachine: (multinode-183938) Calling .GetSSHUsername
	I0327 18:13:41.177487   35607 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/multinode-183938/id_rsa Username:docker}
	I0327 18:13:41.258384   35607 ssh_runner.go:195] Run: systemctl --version
	I0327 18:13:41.265198   35607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 18:13:41.282693   35607 kubeconfig.go:125] found "multinode-183938" server: "https://192.168.39.24:8443"
	I0327 18:13:41.282720   35607 api_server.go:166] Checking apiserver status ...
	I0327 18:13:41.282753   35607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 18:13:41.298008   35607 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0327 18:13:41.308718   35607 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0327 18:13:41.308765   35607 ssh_runner.go:195] Run: ls
	I0327 18:13:41.313342   35607 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0327 18:13:41.317401   35607 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0327 18:13:41.317435   35607 status.go:422] multinode-183938 apiserver status = Running (err=<nil>)
	I0327 18:13:41.317447   35607 status.go:257] multinode-183938 status: &{Name:multinode-183938 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 18:13:41.317471   35607 status.go:255] checking status of multinode-183938-m02 ...
	I0327 18:13:41.317744   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.317780   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.332209   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0327 18:13:41.332522   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.332890   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.332926   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.333255   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.333449   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetState
	I0327 18:13:41.334940   35607 status.go:330] multinode-183938-m02 host status = "Running" (err=<nil>)
	I0327 18:13:41.334956   35607 host.go:66] Checking if "multinode-183938-m02" exists ...
	I0327 18:13:41.335275   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.335311   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.349250   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0327 18:13:41.349634   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.350012   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.350032   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.350293   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.350477   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetIP
	I0327 18:13:41.352960   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | domain multinode-183938-m02 has defined MAC address 52:54:00:6e:9e:0b in network mk-multinode-183938
	I0327 18:13:41.353368   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9e:0b", ip: ""} in network mk-multinode-183938: {Iface:virbr1 ExpiryTime:2024-03-27 19:12:19 +0000 UTC Type:0 Mac:52:54:00:6e:9e:0b Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-183938-m02 Clientid:01:52:54:00:6e:9e:0b}
	I0327 18:13:41.353401   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | domain multinode-183938-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:6e:9e:0b in network mk-multinode-183938
	I0327 18:13:41.353552   35607 host.go:66] Checking if "multinode-183938-m02" exists ...
	I0327 18:13:41.353808   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.353842   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.367381   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0327 18:13:41.367736   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.368125   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.368142   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.368384   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.368564   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .DriverName
	I0327 18:13:41.368717   35607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 18:13:41.368738   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetSSHHostname
	I0327 18:13:41.371230   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | domain multinode-183938-m02 has defined MAC address 52:54:00:6e:9e:0b in network mk-multinode-183938
	I0327 18:13:41.371624   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9e:0b", ip: ""} in network mk-multinode-183938: {Iface:virbr1 ExpiryTime:2024-03-27 19:12:19 +0000 UTC Type:0 Mac:52:54:00:6e:9e:0b Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-183938-m02 Clientid:01:52:54:00:6e:9e:0b}
	I0327 18:13:41.371663   35607 main.go:141] libmachine: (multinode-183938-m02) DBG | domain multinode-183938-m02 has defined IP address 192.168.39.155 and MAC address 52:54:00:6e:9e:0b in network mk-multinode-183938
	I0327 18:13:41.371850   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetSSHPort
	I0327 18:13:41.372018   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetSSHKeyPath
	I0327 18:13:41.372177   35607 main.go:141] libmachine: (multinode-183938-m02) Calling .GetSSHUsername
	I0327 18:13:41.372309   35607 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18517-5351/.minikube/machines/multinode-183938-m02/id_rsa Username:docker}
	I0327 18:13:41.457179   35607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 18:13:41.472741   35607 status.go:257] multinode-183938-m02 status: &{Name:multinode-183938-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0327 18:13:41.472767   35607 status.go:255] checking status of multinode-183938-m03 ...
	I0327 18:13:41.473098   35607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:13:41.473139   35607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:13:41.488652   35607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43469
	I0327 18:13:41.489009   35607 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:13:41.489460   35607 main.go:141] libmachine: Using API Version  1
	I0327 18:13:41.489483   35607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:13:41.489775   35607 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:13:41.489947   35607 main.go:141] libmachine: (multinode-183938-m03) Calling .GetState
	I0327 18:13:41.491578   35607 status.go:330] multinode-183938-m03 host status = "Stopped" (err=<nil>)
	I0327 18:13:41.491605   35607 status.go:343] host is not running, skipping remaining checks
	I0327 18:13:41.491613   35607 status.go:257] multinode-183938-m03 status: &{Name:multinode-183938-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
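The stderr trace above shows `status` probing disk usage on each node over SSH with `sh -c "df -h /var | awk 'NR==2{print $5}'"`. A local sketch with made-up `df` output (the filesystem name and sizes are illustrative, not from this run) shows what that one-liner extracts:

```shell
# Illustrative df output; minikube runs the real command over SSH per node.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        17G  2.4G   14G  15% /var'

# NR==2 skips the header row; $5 is the Use% column for /var.
printf '%s\n' "$df_output" | awk 'NR==2{print $5}'
```

With this sample input the one-liner prints `15%`, the percentage-used figure for the node's /var filesystem.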
TestMultiNode/serial/StartAfterStop (24.23s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 node start m03 -v=7 --alsologtostderr
E0327 18:13:53.677655   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-183938 node start m03 -v=7 --alsologtostderr: (23.602953154s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.23s)
TestMultiNode/serial/RestartKeepsNodes (295.16s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183938
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-183938
E0327 18:15:51.053610   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-183938: (3m5.326256673s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183938 --wait=true -v=8 --alsologtostderr
E0327 18:18:53.677636   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 18:18:54.100467   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183938 --wait=true -v=8 --alsologtostderr: (1m49.7261548s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183938
--- PASS: TestMultiNode/serial/RestartKeepsNodes (295.16s)
TestMultiNode/serial/DeleteNode (2.16s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-183938 node delete m03: (1.629437288s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)
TestMultiNode/serial/StopMultiNode (184.07s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 stop
E0327 18:20:51.053295   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-183938 stop: (3m3.882698783s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183938 status: exit status 7 (92.377267ms)
-- stdout --
	multinode-183938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr: exit status 7 (93.310171ms)
-- stdout --
	multinode-183938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0327 18:22:07.070700   38132 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:22:07.070807   38132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:22:07.070817   38132 out.go:304] Setting ErrFile to fd 2...
	I0327 18:22:07.070821   38132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:22:07.071010   38132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 18:22:07.071169   38132 out.go:298] Setting JSON to false
	I0327 18:22:07.071194   38132 mustload.go:65] Loading cluster: multinode-183938
	I0327 18:22:07.071302   38132 notify.go:220] Checking for updates...
	I0327 18:22:07.071551   38132 config.go:182] Loaded profile config "multinode-183938": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:22:07.071565   38132 status.go:255] checking status of multinode-183938 ...
	I0327 18:22:07.071922   38132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:22:07.071982   38132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:22:07.091455   38132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0327 18:22:07.091891   38132 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:22:07.092450   38132 main.go:141] libmachine: Using API Version  1
	I0327 18:22:07.092479   38132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:22:07.092848   38132 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:22:07.093046   38132 main.go:141] libmachine: (multinode-183938) Calling .GetState
	I0327 18:22:07.094742   38132 status.go:330] multinode-183938 host status = "Stopped" (err=<nil>)
	I0327 18:22:07.094758   38132 status.go:343] host is not running, skipping remaining checks
	I0327 18:22:07.094766   38132 status.go:257] multinode-183938 status: &{Name:multinode-183938 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 18:22:07.094787   38132 status.go:255] checking status of multinode-183938-m02 ...
	I0327 18:22:07.095097   38132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0327 18:22:07.095135   38132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 18:22:07.109731   38132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0327 18:22:07.110147   38132 main.go:141] libmachine: () Calling .GetVersion
	I0327 18:22:07.110622   38132 main.go:141] libmachine: Using API Version  1
	I0327 18:22:07.110668   38132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 18:22:07.110986   38132 main.go:141] libmachine: () Calling .GetMachineName
	I0327 18:22:07.111182   38132 main.go:141] libmachine: (multinode-183938-m02) Calling .GetState
	I0327 18:22:07.112662   38132 status.go:330] multinode-183938-m02 host status = "Stopped" (err=<nil>)
	I0327 18:22:07.112677   38132 status.go:343] host is not running, skipping remaining checks
	I0327 18:22:07.112684   38132 status.go:257] multinode-183938-m02 status: &{Name:multinode-183938-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.07s)
TestMultiNode/serial/RestartMultiNode (125.37s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183938 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0327 18:23:53.678312   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183938 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m4.83396856s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183938 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (125.37s)
TestMultiNode/serial/ValidateNameConflict (49s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183938
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183938-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-183938-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.592876ms)

-- stdout --
	* [multinode-183938-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-183938-m02' is duplicated with machine name 'multinode-183938-m02' in profile 'multinode-183938'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183938-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183938-m03 --driver=kvm2  --container-runtime=containerd: (47.81790495s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183938
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-183938: exit status 80 (226.070894ms)

-- stdout --
	* Adding node m03 to cluster multinode-183938 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-183938-m03 already exists in multinode-183938-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-183938-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.00s)

TestPreload (346.8s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-577947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0327 18:25:51.054019   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-577947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m14.092227905s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-577947 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-577947 image pull gcr.io/k8s-minikube/busybox: (2.553342892s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-577947
E0327 18:28:36.729024   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 18:28:53.679290   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-577947: (1m31.740521755s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-577947 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-577947 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (57.295099571s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-577947 image list
helpers_test.go:175: Cleaning up "test-preload-577947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-577947
--- PASS: TestPreload (346.80s)

TestScheduledStopUnix (118.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-918604 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0327 18:30:51.053399   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-918604 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.302265841s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-918604 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-918604 -n scheduled-stop-918604
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-918604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-918604 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-918604 -n scheduled-stop-918604
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-918604
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-918604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-918604
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-918604: exit status 7 (72.333311ms)

-- stdout --
	scheduled-stop-918604
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-918604 -n scheduled-stop-918604
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-918604 -n scheduled-stop-918604: exit status 7 (74.645216ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-918604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-918604
--- PASS: TestScheduledStopUnix (118.11s)

TestRunningBinaryUpgrade (180.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2912300290 start -p running-upgrade-444755 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0327 18:35:34.101258   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2912300290 start -p running-upgrade-444755 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m49.003354523s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-444755 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-444755 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.353287109s)
helpers_test.go:175: Cleaning up "running-upgrade-444755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-444755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-444755: (1.200649046s)
--- PASS: TestRunningBinaryUpgrade (180.24s)

TestKubernetesUpgrade (247.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0327 18:35:51.053418   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m56.410465834s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-990905
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-990905: (2.358255702s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-990905 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-990905 status --format={{.Host}}: exit status 7 (86.4845ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m19.715583922s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-990905 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (104.982274ms)

-- stdout --
	* [kubernetes-upgrade-990905] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-990905
	    minikube start -p kubernetes-upgrade-990905 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9909052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-990905 --kubernetes-version=v1.30.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-990905 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (47.534543453s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-990905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-990905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-990905: (1.269455175s)
--- PASS: TestKubernetesUpgrade (247.55s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (101.290595ms)

-- stdout --
	* [NoKubernetes-038145] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (94.47s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038145 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038145 --driver=kvm2  --container-runtime=containerd: (1m34.221589647s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-038145 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.47s)

TestNetworkPlugins/group/false (3.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-386201 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-386201 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (112.268743ms)

-- stdout --
	* [false-386201] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0327 18:32:51.309166   42435 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:32:51.309517   42435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:32:51.309531   42435 out.go:304] Setting ErrFile to fd 2...
	I0327 18:32:51.309538   42435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:32:51.309840   42435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-5351/.minikube/bin
	I0327 18:32:51.310652   42435 out.go:298] Setting JSON to false
	I0327 18:32:51.311857   42435 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4505,"bootTime":1711559866,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 18:32:51.311940   42435 start.go:139] virtualization: kvm guest
	I0327 18:32:51.314202   42435 out.go:177] * [false-386201] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 18:32:51.315703   42435 notify.go:220] Checking for updates...
	I0327 18:32:51.315729   42435 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 18:32:51.317138   42435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 18:32:51.318613   42435 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-5351/kubeconfig
	I0327 18:32:51.320128   42435 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-5351/.minikube
	I0327 18:32:51.321397   42435 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 18:32:51.322737   42435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 18:32:51.324346   42435 config.go:182] Loaded profile config "NoKubernetes-038145": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:32:51.324470   42435 config.go:182] Loaded profile config "force-systemd-env-038736": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:32:51.324593   42435 config.go:182] Loaded profile config "offline-containerd-007261": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0327 18:32:51.324692   42435 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 18:32:51.359972   42435 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 18:32:51.361311   42435 start.go:297] selected driver: kvm2
	I0327 18:32:51.361329   42435 start.go:901] validating driver "kvm2" against <nil>
	I0327 18:32:51.361344   42435 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 18:32:51.363521   42435 out.go:177] 
	W0327 18:32:51.364670   42435 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0327 18:32:51.365808   42435 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-386201 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-386201

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-386201

>>> host: /etc/nsswitch.conf:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: /etc/hosts:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: /etc/resolv.conf:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-386201

>>> host: crictl pods:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: crictl containers:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> k8s: describe netcat deployment:
error: context "false-386201" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-386201" does not exist

>>> k8s: netcat logs:
error: context "false-386201" does not exist

>>> k8s: describe coredns deployment:
error: context "false-386201" does not exist

>>> k8s: describe coredns pods:
error: context "false-386201" does not exist

>>> k8s: coredns logs:
error: context "false-386201" does not exist

>>> k8s: describe api server pod(s):
error: context "false-386201" does not exist

>>> k8s: api server logs:
error: context "false-386201" does not exist

>>> host: /etc/cni:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: ip a s:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: ip r s:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: iptables-save:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

>>> host: iptables table nat:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-386201" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-386201

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386201"

                                                
                                                
----------------------- debugLogs end: false-386201 [took: 2.895890223s] --------------------------------
helpers_test.go:175: Cleaning up "false-386201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-386201
--- PASS: TestNetworkPlugins/group/false (3.14s)

TestNoKubernetes/serial/StartWithStopK8s (77.61s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m16.338952117s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-038145 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-038145 status -o json: exit status 2 (275.452937ms)

-- stdout --
	{"Name":"NoKubernetes-038145","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-038145
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (77.61s)
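The status JSON above shows the expected shape for a `--no-kubernetes` profile: the host is `Running` while the kubelet and API server are `Stopped`, which is also why `minikube status` exits nonzero here. A minimal sketch (not part of the test suite; the helper name is illustrative) of checking that condition from the captured JSON:

```python
import json

# Status line captured from `minikube status -o json` in the log above.
STATUS = '{"Name":"NoKubernetes-038145","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

def k8s_stopped(status_json: str) -> bool:
    """True when the VM host runs but the Kubernetes components are stopped."""
    s = json.loads(status_json)
    return (s["Host"] == "Running"
            and s["Kubelet"] == "Stopped"
            and s["APIServer"] == "Stopped")

print(k8s_stopped(STATUS))  # → True for the --no-kubernetes profile above
```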

TestNoKubernetes/serial/Start (28.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038145 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.633722154s)
--- PASS: TestNoKubernetes/serial/Start (28.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-038145 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-038145 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.8079ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.81s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.81s)

TestNoKubernetes/serial/Stop (1.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-038145
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-038145: (1.434108656s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

TestNoKubernetes/serial/StartNoArgs (77.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038145 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038145 --driver=kvm2  --container-runtime=containerd: (1m17.759342812s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (77.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-038145 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-038145 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.068358ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/Setup (2.87s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.87s)

TestStoppedBinaryUpgrade/Upgrade (158.31s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1570196563 start -p stopped-upgrade-741030 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1570196563 start -p stopped-upgrade-741030 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (58.988795352s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1570196563 -p stopped-upgrade-741030 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1570196563 -p stopped-upgrade-741030 stop: (2.144455684s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-741030 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-741030 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.175766497s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (158.31s)

TestPause/serial/Start (65.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-435476 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-435476 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m5.863962017s)
--- PASS: TestPause/serial/Start (65.86s)

TestNetworkPlugins/group/auto/Start (126.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0327 18:38:53.678015   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m6.873539325s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.87s)

TestPause/serial/SecondStartNoReconfiguration (62.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-435476 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-435476 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m2.161093134s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (62.20s)

TestNetworkPlugins/group/kindnet/Start (71.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m11.945114194s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.95s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-741030
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-741030: (1.131972004s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestNetworkPlugins/group/calico/Start (98.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m38.428549421s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.43s)

TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-435476 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-435476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-435476 --output=json --layout=cluster: exit status 2 (280.8084ms)

-- stdout --
	{"Name":"pause-435476","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-435476","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
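In the VerifyStatus output above, `--layout=cluster` reports component state as HTTP-style status codes (418 = Paused, 405 = Stopped, 200 = OK, per the `StatusName` fields in the captured JSON). A small sketch, not part of the suite, of extracting the paused components from that output (the helper name is illustrative and the sample JSON is trimmed from the log):

```python
import json

# Trimmed from the `minikube status --output=json --layout=cluster` capture above.
CLUSTER = ('{"Name":"pause-435476","StatusCode":418,"StatusName":"Paused",'
           '"Nodes":[{"Name":"pause-435476","StatusCode":200,'
           '"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},'
           '"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}')

def paused_components(cluster_json: str) -> list:
    """Names of node components reported with the 418 (Paused) status code."""
    c = json.loads(cluster_json)
    return [name
            for node in c.get("Nodes", [])
            for name, comp in node.get("Components", {}).items()
            if comp.get("StatusCode") == 418]

print(paused_components(CLUSTER))  # → ['apiserver']
```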

TestPause/serial/Unpause (1.08s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-435476 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-435476 --alsologtostderr -v=5: (1.076119758s)
--- PASS: TestPause/serial/Unpause (1.08s)

TestPause/serial/PauseAgain (1.31s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-435476 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-435476 --alsologtostderr -v=5: (1.308166237s)
--- PASS: TestPause/serial/PauseAgain (1.31s)

TestPause/serial/DeletePaused (1.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-435476 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-435476 --alsologtostderr -v=5: (1.05776904s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestNetworkPlugins/group/custom-flannel/Start (103.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0327 18:40:51.054299   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m43.110215543s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.11s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2n7sg" [83787cfa-bc6d-4f95-8677-09721c31dcca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2n7sg" [83787cfa-bc6d-4f95-8677-09721c31dcca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004922695s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-87kjd" [e71bdc0d-f431-4cad-8691-e1fe898ae9fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005760778s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5hr8k" [83e77694-9c89-4a3f-9e03-deef4ba352a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5hr8k" [83e77694-9c89-4a3f-9e03-deef4ba352a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004997075s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (110.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m50.264313918s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (110.26s)

TestNetworkPlugins/group/flannel/Start (109.2s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m49.19539304s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.20s)

TestNetworkPlugins/group/calico/ControllerPod (5.2s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d6rdg" [509257cf-7de1-45a2-b916-70eb110b5c6a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.196188297s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xxvp2" [1e63da1e-e57f-40a6-a942-d7679b17afd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xxvp2" [1e63da1e-e57f-40a6-a942-d7679b17afd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006135568s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.46s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vdncm" [73632fd1-1e39-4ce5-8451-a1564e16cd66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vdncm" [73632fd1-1e39-4ce5-8451-a1564e16cd66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005379131s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (96.97s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-386201 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m36.96823184s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.97s)

TestStartStop/group/old-k8s-version/serial/FirstStart (196.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-947582 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-947582 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m16.861097729s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (196.86s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x8t2b" [7480affe-d19e-4105-badf-101136231bde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x8t2b" [7480affe-d19e-4105-badf-101136231bde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00467142s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v847x" [39d1be03-8272-4d24-9d22-188bc778bb63] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005331448s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cm76d" [4211f5a3-088d-409d-838e-89ebcb573bc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cm76d" [4211f5a3-088d-409d-838e-89ebcb573bc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004344027s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

TestStartStop/group/no-preload/serial/FirstStart (171.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-882455 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-882455 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (2m51.674589629s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (171.67s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/embed-certs/serial/FirstStart (120.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-994378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-994378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (2m0.314646503s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (120.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-386201 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-386201 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-75xcw" [e5340eb9-80b6-4b33-857b-792630dfa102] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-75xcw" [e5340eb9-80b6-4b33-857b-792630dfa102] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005330725s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-386201 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-386201 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-924931 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 18:45:16.729999   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 18:45:51.053299   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 18:45:55.271992   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.277264   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.287545   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.307810   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.348117   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.428715   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.589118   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:55.909595   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:56.550472   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:45:57.830624   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-924931 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m51.73880381s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.74s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-947582 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [942d4ab0-a88e-4c1b-a670-078c57b1052c] Pending
helpers_test.go:344: "busybox" [942d4ab0-a88e-4c1b-a670-078c57b1052c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [942d4ab0-a88e-4c1b-a670-078c57b1052c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004189043s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-947582 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-994378 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9ce70907-f3b7-463b-911b-6d98d04d9421] Pending
E0327 18:46:00.391502   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9ce70907-f3b7-463b-911b-6d98d04d9421] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0327 18:46:01.509696   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.515017   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.525365   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.545735   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.586064   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.666826   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:01.827281   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:02.148408   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:02.788988   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9ce70907-f3b7-463b-911b-6d98d04d9421] Running
E0327 18:46:04.069406   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:05.511862   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:46:06.629968   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004373261s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-994378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-947582 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-947582 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.218414016s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-947582 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)
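The EnableAddonWhileActive runs in this report all pass image and registry overrides via `--images=MetricsServer=registry.k8s.io/echoserver:1.4` and `--registries=MetricsServer=fake.domain`. As a hedged sketch (my assumption, not confirmed by this log: minikube prefixes the registry override onto the image override when rendering the addon manifest), the composed reference would be:

```shell
# Hedged sketch: assuming minikube joins the --registries override and the
# --images override as "<registry>/<image>" in the rendered addon manifest.
registry="fake.domain"                  # from --registries=MetricsServer=...
image="registry.k8s.io/echoserver:1.4"  # from --images=MetricsServer=...
composed="${registry}/${image}"
echo "$composed"
# -> fake.domain/registry.k8s.io/echoserver:1.4
```

The subsequent `kubectl describe deploy/metrics-server -n kube-system` step would then be the place to confirm that the override actually landed in the deployment spec.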

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-994378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-994378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110943353s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-994378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-947582 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-947582 --alsologtostderr -v=3: (1m32.502134519s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-994378 --alsologtostderr -v=3
E0327 18:46:11.750825   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:46:15.752286   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-994378 --alsologtostderr -v=3: (1m32.476116346s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-924931 create -f testdata/busybox.yaml
E0327 18:46:21.991341   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94c2e76e-7071-407e-8816-f18e3224961b] Pending
helpers_test.go:344: "busybox" [94c2e76e-7071-407e-8816-f18e3224961b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [94c2e76e-7071-407e-8816-f18e3224961b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004362296s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-924931 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)
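Each DeployApp run above follows the same pattern: create busybox from `testdata/busybox.yaml`, wait up to 8m0s for pods matching `integration-test=busybox` to go Pending → Pending/ContainersNotReady → Running, then exec `ulimit -n` inside the pod. A minimal stand-in for that wait loop (the phases below simply replay what `helpers_test.go` logged; a real check would poll the cluster instead of reading a here-doc):

```shell
# Replay of the pod-phase transitions recorded above; a real implementation
# would poll `kubectl get pod -l integration-test=busybox` with a sleep
# between polls and enforce the test's 8m0s deadline.
result=""
while read -r phase; do
  if [ "$phase" = "Running" ]; then
    result="Running"
    break
  fi
done <<'EOF'
Pending
Pending
Running
EOF
echo "busybox phase: ${result:-timed out}"
```

Once the pod is Running, the test's health check is simply the open-file limit reported by `/bin/sh -c "ulimit -n"` inside the container.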

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-924931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-924931 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-924931 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-924931 --alsologtostderr -v=3: (1m32.468562675s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-882455 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7fddd548-a5f6-4bb1-88ea-757ad8179f20] Pending
helpers_test.go:344: "busybox" [7fddd548-a5f6-4bb1-88ea-757ad8179f20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0327 18:46:36.233270   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7fddd548-a5f6-4bb1-88ea-757ad8179f20] Running
E0327 18:46:42.472218   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00466374s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-882455 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-882455 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-882455 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-882455 --alsologtostderr -v=3
E0327 18:46:52.161525   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.166779   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.176996   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.197237   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.237493   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.317832   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.478349   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:52.799503   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:53.439762   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:54.720122   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:46:57.280438   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:47:02.400847   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:47:12.641529   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:47:13.135438   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.140670   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.150884   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.171117   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.211346   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.291861   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.452243   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:13.773223   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:14.413462   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:15.693623   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:17.194234   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:47:18.254333   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:23.374887   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:47:23.433119   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:47:33.122476   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:47:33.615534   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-882455 --alsologtostderr -v=3: (1m31.820832506s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947582 -n old-k8s-version-947582
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947582 -n old-k8s-version-947582: exit status 7 (74.905562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-947582 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
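The EnableAddonAfterStop runs above lean on a status/exit-code contract: after Stop, `minikube status --format={{.Host}}` prints `Stopped` and exits 7, which the test records as "may be ok" before enabling the dashboard addon. A minimal sketch of that handling, with a hypothetical stub in place of the real command:

```shell
# Hypothetical stand-in for `minikube status --format={{.Host}}` on a
# stopped profile; the real command above printed "Stopped" and exited 7.
fake_status() { printf 'Stopped\n'; return 7; }

host=$(fake_status)
rc=$?   # exit status of fake_status, carried through the substitution
# In this log, exit status 7 accompanies a Stopped host; the test treats
# that combination as acceptable ("status error: exit status 7 (may be
# ok)") and only then proceeds to `addons enable dashboard`.
if [ "$rc" -eq 7 ] && [ "$host" = "Stopped" ]; then
  echo "stopped profile confirmed; enabling addon before restart"
fi
```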

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994378 -n embed-certs-994378
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994378 -n embed-certs-994378: exit status 7 (77.706642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-994378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (445.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-947582 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-947582 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (7m25.069859788s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947582 -n old-k8s-version-947582
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (445.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (336.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-994378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 18:47:54.096500   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-994378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m35.831894882s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994378 -n embed-certs-994378
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931: exit status 7 (91.49512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-924931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-924931 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0327 18:48:13.790269   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:13.795601   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:13.805888   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:13.826186   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:13.866494   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:13.946883   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:14.082803   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:48:14.107016   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:14.428107   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:15.069100   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-924931 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m36.330289239s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
E0327 18:53:41.474581   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-882455 -n no-preload-882455
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-882455 -n no-preload-882455: exit status 7 (85.847428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-882455 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0327 18:48:16.349990   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (353.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-882455 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0327 18:48:18.911071   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:24.031839   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:24.997602   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.002899   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.013224   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.033451   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.073788   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.154593   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.315394   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:25.636137   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:26.276627   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:27.556762   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:30.117762   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:34.272943   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:48:35.057654   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:48:35.238639   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:39.114665   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:48:45.353489   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:48:45.479771   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:48:53.678575   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
E0327 18:48:54.753641   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:49:04.006287   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.011571   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.021820   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.042122   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.082437   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.162759   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.323181   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:04.643477   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:05.283623   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:05.960386   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:49:06.564450   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:09.124566   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:14.244790   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:24.485700   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:35.714255   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:49:36.003707   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:49:44.966666   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:49:46.921341   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:49:56.978795   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:50:25.927555   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:50:51.053888   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 18:50:55.272216   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:50:57.634457   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
E0327 18:51:01.509656   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:51:08.841538   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:51:22.955807   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/auto-386201/client.crt: no such file or directory
E0327 18:51:29.193925   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/kindnet-386201/client.crt: no such file or directory
E0327 18:51:47.847779   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
E0327 18:51:52.161597   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:52:13.136162   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:52:14.102246   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/functional-351970/client.crt: no such file or directory
E0327 18:52:19.844874   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/calico-386201/client.crt: no such file or directory
E0327 18:52:40.819686   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/custom-flannel-386201/client.crt: no such file or directory
E0327 18:53:13.790359   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/enable-default-cni-386201/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-882455 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (5m52.790342385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-882455 -n no-preload-882455
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (353.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmf2s" [4670aadb-8545-4f4e-b849-0c1800d44c2e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0327 18:53:24.997616   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmf2s" [4670aadb-8545-4f4e-b849-0c1800d44c2e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004388112s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmf2s" [4670aadb-8545-4f4e-b849-0c1800d44c2e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005764193s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-994378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-994378 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-994378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-994378 --alsologtostderr -v=1: (1.11632268s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994378 -n embed-certs-994378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994378 -n embed-certs-994378: exit status 2 (299.190685ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994378 -n embed-certs-994378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994378 -n embed-certs-994378: exit status 2 (309.63181ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-994378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994378 -n embed-certs-994378
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994378 -n embed-certs-994378
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gtgkc" [0c26b21f-6422-4287-9e61-0642cad8661c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gtgkc" [0c26b21f-6422-4287-9e61-0642cad8661c] Running
E0327 18:53:52.682370   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/flannel-386201/client.crt: no such file or directory
E0327 18:53:53.678201   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/addons-295637/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.012081352s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-085682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-085682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m0.305768603s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gtgkc" [0c26b21f-6422-4287-9e61-0642cad8661c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004961666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-924931 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-924931 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-924931 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931: exit status 2 (267.868557ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931: exit status 2 (264.822218ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-924931 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-924931 -n default-k8s-diff-port-924931
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s8pkf" [f2984329-f52d-42bb-ac72-679e48aa17ff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s8pkf" [f2984329-f52d-42bb-ac72-679e48aa17ff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005590494s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s8pkf" [f2984329-f52d-42bb-ac72-679e48aa17ff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00487638s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-882455 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-882455 image list --format=json
E0327 18:54:31.688459   12617 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-5351/.minikube/profiles/bridge-386201/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-882455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-882455 -n no-preload-882455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-882455 -n no-preload-882455: exit status 2 (265.83613ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-882455 -n no-preload-882455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-882455 -n no-preload-882455: exit status 2 (252.590267ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-882455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-882455 -n no-preload-882455
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-882455 -n no-preload-882455
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-085682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-085682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014540178s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-085682 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-085682 --alsologtostderr -v=3: (2.353145461s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085682 -n newest-cni-085682
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085682 -n newest-cni-085682: exit status 7 (74.528679ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-085682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (39.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-085682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-085682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (39.200505991s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-085682 -n newest-cni-085682
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rs8qf" [8533fb8b-13bd-4d9a-ab81-b7dd537cdaa0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004627863s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rs8qf" [8533fb8b-13bd-4d9a-ab81-b7dd537cdaa0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004701213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-947582 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-947582 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-947582 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947582 -n old-k8s-version-947582
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947582 -n old-k8s-version-947582: exit status 2 (256.013827ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-947582 -n old-k8s-version-947582
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-947582 -n old-k8s-version-947582: exit status 2 (268.328274ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-947582 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947582 -n old-k8s-version-947582
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-947582 -n old-k8s-version-947582
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-085682 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-085682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085682 -n newest-cni-085682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085682 -n newest-cni-085682: exit status 2 (248.419688ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085682 -n newest-cni-085682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085682 -n newest-cni-085682: exit status 2 (252.312377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-085682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-085682 -n newest-cni-085682
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-085682 -n newest-cni-085682
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)
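Each `Pause` subtest above follows the same pattern: `pause` the profile, expect `status --format={{.APIServer}}` / `{{.Kubelet}}` to print the component state and exit with status 2 (which the harness logs as "may be ok"), then `unpause`. A minimal shell sketch of that exit-code convention — `fake_status` is a hypothetical stand-in for the `out/minikube-linux-amd64 status` invocation, since neither the binary nor a live profile is assumed here:

```shell
# fake_status stands in for `minikube status --format={{.X}}` on a paused
# profile: it prints the component state and exits 2, matching the
# "exit status 2" recorded in the log above.
fake_status() {
  printf '%s\n' "$1"
  return 2
}

# Mirror the harness logic: exit status 2 is "may be ok"; any other
# nonzero status is a real failure.
check_component() {
  out=$(fake_status "$1")
  rc=$?
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 2 ]; then
    echo "status error: exit status $rc" >&2
    return 1
  fi
  echo "$out rc=$rc"
}

check_component Paused
check_component Stopped
```

The point of the convention is that a paused or stopped component makes `status` exit nonzero by design, so the test only fails on exit codes other than 0 and 2.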
Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.11
271 TestNetworkPlugins/group/cilium 3.44
286 TestStartStop/group/disable-driver-mounts 0.14
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-386201 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-386201

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-386201" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-386201" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-386201" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-386201" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: kubelet daemon config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> k8s: kubelet logs:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-386201

>>> host: docker daemon status:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: docker daemon config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: docker system info:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: cri-docker daemon status:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: cri-docker daemon config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: cri-dockerd version:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: containerd daemon status:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: containerd daemon config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: containerd config dump:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: crio daemon status:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: crio daemon config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: /etc/crio:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

>>> host: crio config:
* Profile "kubenet-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386201"

----------------------- debugLogs end: kubenet-386201 [took: 2.964408227s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-386201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-386201
--- SKIP: TestNetworkPlugins/group/kubenet (3.11s)

TestNetworkPlugins/group/cilium (3.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-386201 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-386201

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-386201

>>> host: /etc/nsswitch.conf:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/hosts:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/resolv.conf:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-386201

>>> host: crictl pods:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: crictl containers:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> k8s: describe netcat deployment:
error: context "cilium-386201" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-386201" does not exist

>>> k8s: netcat logs:
error: context "cilium-386201" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-386201" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-386201" does not exist

>>> k8s: coredns logs:
error: context "cilium-386201" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-386201" does not exist

>>> k8s: api server logs:
error: context "cilium-386201" does not exist

>>> host: /etc/cni:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: ip a s:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: ip r s:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: iptables-save:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: iptables table nat:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-386201

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-386201

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-386201" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-386201" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-386201

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-386201

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-386201" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-386201" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-386201" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-386201" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-386201" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: kubelet daemon config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> k8s: kubelet logs:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-386201

>>> host: docker daemon status:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: docker daemon config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: docker system info:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: cri-docker daemon status:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: cri-docker daemon config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: cri-dockerd version:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: containerd daemon status:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: containerd daemon config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: containerd config dump:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: crio daemon status:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: crio daemon config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: /etc/crio:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

>>> host: crio config:
* Profile "cilium-386201" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386201"

----------------------- debugLogs end: cilium-386201 [took: 3.293185556s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-386201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-386201
--- SKIP: TestNetworkPlugins/group/cilium (3.44s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-871221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-871221
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)